Author

Kumar K. Tamma

Bio: Kumar K. Tamma is an academic researcher from the University of Minnesota. The author has contributed to research in the topics of Finite element method and Nonlinear system. The author has an h-index of 33 and has co-authored 344 publications receiving 4,255 citations. Previous affiliations of Kumar K. Tamma include West Virginia University and the United States Army Research Laboratory.


Papers
Journal ArticleDOI
TL;DR: The design leading to optimal algorithms in the context of a generalized single-step single-solve framework, within the limitation of the Dahlquist barrier, is described for structural dynamics computations, thereby providing closure to the class of LMS methods.
Abstract: The primary objectives of the present exposition are to: (i) provide a generalized unified mathematical framework and setting leading to the unique design of computational algorithms for structural dynamic problems, encompassing the broad scope of linear multi-step (LMS) methods within the limitation of the Dahlquist barrier theorem (Reference [3], G. Dahlquist, BIT 1963; 3: 27), and also leading to new designs of numerically dissipative methods with optimal algorithmic attributes that cannot be obtained using existing frameworks in the literature; (ii) provide a meaningful characterization of various numerically dissipative/non-dissipative time integration algorithms, both new and existing in the literature, based on the overshoot behavior of algorithms, leading to the notion of algorithms by design; and (iii) provide design guidelines on the selection of algorithms for structural dynamic analysis within the scope of LMS methods. For structural dynamics problems, the so-called linear multi-step (LMS) methods are first proven to be spectrally identical to a newly developed family of generalized single-step single-solve (GSSSS) algorithms. The design, synthesis and analysis of the unified framework of computational algorithms, based on the overshooting behavior and additional algorithmic properties such as second-order accuracy and unconditional stability with numerically dissipative features, yields three sub-classes of practical computational algorithms: (i) zero-order displacement and velocity overshoot (U0-V0) algorithms; (ii) zero-order displacement and first-order velocity overshoot (U0-V1) algorithms; and (iii) first-order displacement and zero-order velocity overshoot (U1-V0) algorithms (the remainder, involving higher orders of overshooting behavior, are not considered competitive from practical considerations). Within each sub-class, a further distinction is made between the designs leading to optimal numerically dissipative and dispersive algorithms, and the continuous acceleration and discontinuous acceleration algorithms that are subsets, corresponding to the designed placement of the spurious root at the low-frequency limit or the high-frequency limit, respectively. The conclusions and design guidelines are then drawn: the U0-V1 algorithms are suitable only for given initial velocity problems, the U1-V0 algorithms are suitable only for given initial displacement problems, and the U0-V0 algorithms are ideal for either or both cases of given initial displacement and initial velocity. For the first time, the design leading to optimal algorithms, in the context of a generalized single-step single-solve framework and within the limitation of the Dahlquist barrier, that maintains second-order accuracy and unconditional stability with/without numerically dissipative features is described for structural dynamics computations, thereby providing closure to the class of LMS methods. Copyright © 2003 John Wiley & Sons, Ltd.
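
For orientation, the sketch below implements one classical, non-dissipative member of the single-step single-solve class discussed above: Newmark's method with the average-acceleration parameters (gamma = 1/2, beta = 1/4), which is second-order accurate and unconditionally stable. This is a minimal illustration of the algorithm family only, not the optimal dissipative designs derived in the paper; the matrices M, C, K, the step size dt and the variable names are assumptions for a linear semi-discrete system M*a + C*v + K*u = f.

```python
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, gamma=0.5, beta=0.25):
    """Advance M*a + C*v + K*u = f by one step with Newmark's method.

    gamma=0.5, beta=0.25 (average acceleration) gives a second-order
    accurate, unconditionally stable, non-dissipative single-step
    single-solve scheme (one linear solve per step).
    """
    # Predictors built from the known state (u_n, v_n, a_n)
    u_pred = u + dt * v + (0.5 - beta) * dt**2 * a
    v_pred = v + (1.0 - gamma) * dt * a
    # Single linear solve per step for the new acceleration a_{n+1}
    lhs = M + gamma * dt * C + beta * dt**2 * K
    a_new = np.linalg.solve(lhs, f_next - C @ v_pred - K @ u_pred)
    # Correctors complete the update
    u_new = u_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return u_new, v_new, a_new
```

For a linear problem the left-hand-side matrix is constant, so in practice it would be factorized once and reused across all steps.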

164 citations

Journal ArticleDOI
TL;DR: In this article, a recipe for the asymptotic expansion homogenization (AEH) approach is presented that can be used for future developments in many areas of materially and geometrically nonlinear continuum mechanics.
Abstract: Developments in asymptotic expansion homogenization (AEH) are overviewed in the context of engineering multi-scale problems. The multi-scale problems presently considered are those linking continuum-level descriptions at two different length scales. Concurrent research in the literature is first described. A recipe for the AEH approach is then presented that can be used for future developments in many areas of materially and geometrically nonlinear continuum mechanics. A derivation is then outlined, using the finite element method, that is useful for engineering applications and leads to coupled hierarchical partial differential equations in elasticity. The approach provides causal relationships between the macro and micro scales, wherein procedures for homogenization of properties and localization of small-scale response are built in. A brief discussion of a physical paradox in the estimation of micro-stresses, which tends to be a barrier to understanding the method, is introduced. Computational issues are highlighted, and illustrative applications in linear elasticity are then presented for composites containing microstructures with complex geometries.
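
As a pointer to the structure of the method, the standard AEH ansatz and the resulting homogenized moduli take the form below, written here for linear elasticity in common textbook notation (the cell domain Y, corrector functions chi, and sign convention are assumptions, not reproduced from the paper):

```latex
% Two-scale asymptotic expansion of the displacement field, with slow
% coordinate x and fast (microstructural) coordinate y = x/\varepsilon:
u^{\varepsilon}(x) = u_0(x,y) + \varepsilon\, u_1(x,y)
                   + \varepsilon^2 u_2(x,y) + \cdots , \qquad y = x/\varepsilon
% Homogenized elasticity tensor obtained by averaging over the periodic
% unit cell Y, using correctors \chi^{kl} from the cell problem
% (sign convention for \chi varies between references):
E^{H}_{ijkl} = \frac{1}{|Y|} \int_{Y}
  \Bigl( E_{ijkl}(y) - E_{ijpq}(y)\,
         \frac{\partial \chi^{kl}_{p}(y)}{\partial y_q} \Bigr)\, dY
```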

148 citations

Journal ArticleDOI
TL;DR: The present exposition overviews new and recent advances describing a standardized formal theory towards the evolution, classification, characterization and generic design of time discretized operators for transient/dynamic applications and explains a wide variety of generalized integration operators in time.
Abstract: Via new perspectives, for the time dimension, the present exposition overviews new and recent advances describing a standardized formal theory towards the evolution, classification, characterization and generic design of time discretized operators for transient/dynamic applications. Of fundamental importance in the present exposition are the developments encompassing the evolution of time discretized operators leading to the theoretical design of computational algorithms and their subsequent classification and characterization. The overall developments are new and significantly different from the way traditional modal-type approaches and the wide variety of step-by-step time marching approaches with which we are mostly familiar have been developed and described in the research literature and in standard textbooks over the years. The theoretical ideas and basis towards the evolution of a generalized methodology and formulations emanate under a single umbrella and framework, and are explained via a generalized time weighted philosophy encompassing the semi-discretized equations pertinent to transient/dynamic systems. It is herein hypothesized that integral operators, the associated representations, and a wide variety of the so-called integration operators pertain to and emanate from the same family; the burden is carried by a virtual field or weighted time field specifically introduced for the time discretization, strictly enacted in a mathematically consistent manner so as to first permit obtaining the adjoint operator of the original semi-discretized equation system. Subsequently, the selection of, or burden carried by, the virtual or weighted time fields originally introduced to facilitate the time discretization process determines the formal development and outcome of “exact integral operators” and “approximate integral operators”, including providing avenues leading to the design of new computational algorithms which have not been exploited and/or explored to date, the recovery of most of the existing algorithms, and the bridging of relationships systematically leading to the evolution of a wide variety of “integration operators”. Thus, the overall developments not only serve as a prelude towards the formal developments for “exact integral operators”, but also demonstrate that the resulting “approximate integral operators” and a wide variety of new and existing integration operators and known methods are simply subsets of the generalizations of a standardized W_p-family, and emanate from the principles presented herein. The developments first leading to integral operators in time, and the resulting consequences then systematically leading not only to new avenues but also to explaining a wide variety of generalized integration operators in time (of which single-step time integration operators and various widely recognized algorithms with which we are familiar are simply subsets), the associated multi-step time integration operators, and a class of finite-element-in-time integration operators, and their relationships, are particularly addressed. The theoretical design developments encompass and explain a variety of time discretized operators, the recovery of various original methods of algorithmic development, and the development of new computational algorithms which have not been exploited and/or explored to date, and furthermore permit time discretized operators to be uniquely classified and characterized by algorithmic markers.
The resulting, so-called discrete numerically assigned (DNA) algorithmic markers not only serve as a prelude towards providing a standardized formal theory for the development of time discretized operators and a forum for selecting and identifying them, but also permit lucid communication when referring to various time discretized operators. What constitutes the characterization of time discretized operators are the DNA algorithmic markers, which essentially comprise both: (i) the weighted time fields introduced for enacting the time discretization process, and (ii) the corresponding conditions (if any) these weighted time fields impose (dictate) upon the approximations for the dependent field variables and updates in the theoretical development of time discretized operators. As such, recent advances encompassing the theoretical design and development of computational algorithms for transient/dynamic analysis of time-dependent phenomena encountered in the engineering, mathematical and physical sciences are overviewed.
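
Schematically, the generalized time-weighted philosophy referred to above can be written as a weighted residual statement over a time step. The sketch below uses generic notation only; the specific admissible weighted fields and the parameters of the W_p-family are developed in the paper and are not reproduced here:

```latex
% Weighted residual in time for the semi-discretized dynamic system
% M \ddot{u} + C \dot{u} + K u = f, over a typical step [t_n, t_{n+1}],
% with W(t) the virtual/weighted time field enacting the discretization:
\int_{t_n}^{t_{n+1}} W(t)\,
  \bigl[ M\,\ddot{u}(t) + C\,\dot{u}(t) + K\,u(t) - f(t) \bigr]\, dt = 0
% Different choices of W(t), together with the conditions it imposes on
% the approximations and updates of u, yield exact or approximate
% integral operators and recover the familiar single-step and
% multi-step integration operators as subsets.
```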

123 citations

Journal ArticleDOI
TL;DR: In this article, a generalized two-step relaxation/retardation time-based heating model is proposed for both macroscale and microscale heat conduction.
Abstract: Some noteworthy historical perspectives and an overview of macroscale and microscale heat transport behavior in materials and structures are presented. The topic of heat waves is also discussed. The significance of constitutive models for both macroscale and microscale heat conduction is described, in conjunction with generalizations drawn concerning the physical relevance and role of the relaxation and retardation times emanating from the Jeffreys-type heat flux constitutive model, with consequences for the Cattaneo heat flux model and subsequently the Fourier heat flux model. Both macroscopic model formulations, for applications to macroscopic heat conduction problems, and two-step models, for use in specialized applications to account for microscale heat transport mechanisms, are overviewed, with emphasis on the proposition of a generalized two-step relaxation/retardation time-based heating model. So as to bring forth a variety of issues in a single forum, illustrative numerical applications are overviewed.
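
The hierarchy of constitutive models discussed above can be summarized in one line each. This is a common textbook form with relaxation time tau and retardation time tau_T; the notation is assumed here rather than taken from the paper:

```latex
% Jeffreys-type heat flux model (relaxation time \tau, retardation time \tau_T):
\mathbf{q} + \tau\,\frac{\partial \mathbf{q}}{\partial t}
  = -k \left( \nabla T + \tau_T\,\frac{\partial (\nabla T)}{\partial t} \right)
% Setting \tau_T = 0 recovers the Cattaneo (hyperbolic, heat-wave) model:
\mathbf{q} + \tau\,\frac{\partial \mathbf{q}}{\partial t} = -k\,\nabla T
% Setting \tau = \tau_T = 0 recovers the classical Fourier model:
\mathbf{q} = -k\,\nabla T
```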

111 citations

01 Jan 1999
TL;DR: In this article, the Stokes equation is used to model the flow in the inter-tow region of the unit cell, and Brinkman's equation is used in the intra-tow region.
Abstract: A good understanding of woven fiber preform permeabilities is critical in the design and optimization of the composite molding processes encountered in resin transfer molding (RTM); yet these issues remain unresolved in the literature. Many have attempted to address permeability predictions for flat, undeformed fiber preforms, but few have investigated permeability variations for complex geometries of porous fibrous media. In this study, the objectives are to: (i) provide a brief review of existing methods for the prediction of fiber mat permeability; (ii) postulate a more realistic representation of a unit cell to account for such fabric structures as crimp, tow spacing and the like; and (iii) apply computational approximations to predict effective permeabilities for use in modeling of structural composites manufacturing processes. The Stokes equation is used to model the flow in the inter-tow region of the unit cell, and Brinkman's equation is used in the intra-tow region. Initial permeability calculations are performed for a three-dimensional unit cell model representative of the PET-61 woven fabric composite. The results show good agreement with experimental data published in the literature.
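
In equation form, the unit-cell flow model described above couples the following systems (a sketch in standard notation; the symbols mu_eff for the Brinkman effective viscosity and K_t for the intra-tow permeability are assumptions, not taken from the paper):

```latex
% Inter-tow (open) region: incompressible Stokes flow
\mu \nabla^2 \mathbf{u} = \nabla p, \qquad \nabla \cdot \mathbf{u} = 0
% Intra-tow (porous) region: Brinkman's equation with tow permeability K_t
\mu_{\mathrm{eff}} \nabla^2 \mathbf{u} - \frac{\mu}{K_t}\,\mathbf{u} = \nabla p
% Effective permeability K_{\mathrm{eff}} identified from the
% cell-averaged velocity via Darcy's law:
\langle \mathbf{u} \rangle = -\frac{K_{\mathrm{eff}}}{\mu}\,\nabla p
```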

110 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
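
The mail-filtering example translates directly into a few lines of code. The toy sketch below (assuming scikit-learn is available; the tiny inline dataset is invented purely for illustration) learns filtering rules from messages a user has already labeled:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Messages the user has already labeled (1 = unwanted, 0 = wanted)
messages = [
    "win a free prize now",
    "limited offer click here",
    "meeting moved to 3pm",
    "draft of the report attached",
]
labels = [1, 1, 0, 0]

# Learn word-count features and a Naive Bayes filter from the examples
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# The learned rules now classify new mail automatically
print(model.predict(vectorizer.transform(["free offer, click now"])))  # -> [1]
```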

13,246 citations

Journal ArticleDOI
TL;DR: In this paper, a review of the history of thermal energy storage with solid-liquid phase change has been carried out, and three aspects have been the focus of this review: materials, heat transfer and applications.

4,019 citations

Book ChapterDOI
01 Jan 1997
TL;DR: This chapter introduces the finite element method (FEM) as a tool for solution of classical electromagnetic problems and discusses the main points in the application to electromagnetic design, including formulation and implementation.
Abstract: This chapter introduces the finite element method (FEM) as a tool for solution of classical electromagnetic problems. Although we discuss the main points in the application of the finite element method to electromagnetic design, including formulation and implementation, those who seek deeper understanding of the finite element method should consult some of the works listed in the bibliography section.

1,820 citations
