Author

Roman Z. Morawski

Other affiliations: Université du Québec
Bio: Roman Z. Morawski is an academic researcher from Warsaw University of Technology. The author has contributed to research in topics including Deconvolution and Tikhonov regularization. The author has an h-index of 14 and has co-authored 102 publications receiving 903 citations. Previous affiliations of Roman Z. Morawski include Université du Québec.


Papers
Journal ArticleDOI
18 May 1993
TL;DR: In this article, a general scheme of measurement is proposed that emphasizes the key role of measurand reconstruction, and two classes of measurand-reconstruction problems of particular importance in practical applications are identified and discussed in more detail.
Abstract: The basic notions of measurement science are overviewed. A general scheme of measurement is proposed that emphasizes the key role of measurand reconstruction. The problems of measurand reconstruction are classified, and two classes of particular importance in practical applications are identified and discussed in more detail. These are the nonlinear reconstruction of a scalar static measurand in the presence of a scalar influence quantity, and the linear reconstruction of a scalar dynamic measurand. Considerations of a tutorial and theoretical nature are illustrated with practical examples.
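As a hedged illustration of those two classes, the underlying measurement models can be sketched as follows; the symbols (f, v, g, e) are generic notation assumed for this sketch, not taken from the paper.

```latex
% Illustrative notation only; f, v, g, e are assumed symbols, not the paper's.

% Nonlinear reconstruction of a scalar static measurand x in the presence of a
% scalar influence quantity v: the raw result y follows a static characteristic,
% and reconstruction inverts it using an estimate of the influence quantity.
y = f(x, v), \qquad \hat{x} = f^{-1}(y; \hat{v})

% Linear reconstruction of a scalar dynamic measurand x(t): the raw result is a
% convolution with the channel's impulse response g(t) plus an error term e(t),
% so reconstruction amounts to a deconvolution problem.
y(t) = \int_{0}^{t} g(t - \tau)\, x(\tau)\, \mathrm{d}\tau + e(t)
```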

74 citations

Patent
19 May 1998
TL;DR: In this article, an apparatus and method for in situ spectral measurement are disclosed. The apparatus uses a low-resolution grating to disperse light and thereby image its spectrum; the imaged spectrum is converted into a digital electrical signal and processed to enhance the spectral information.
Abstract: An apparatus and method for in situ spectral measurement is disclosed. The apparatus uses a low-resolution grating to disperse light and thereby image a spectrum thereof. The imaged spectrum is converted into a digital electrical signal and is processed in order to enhance the spectral information. The resulting spectral information is analogous to that captured using a higher resolution spectral imager with optical processing of the spectral data.

46 citations

Journal ArticleDOI
TL;DR: The main objective of this paper is to review the state-of-the-art methodology for digital signal processing (DSP) when applied to data provided by spectrophotometric transducers and spectrophotometers, based on DSP-oriented models of spectrophotometric data.
Abstract: Spectrophotometry is increasingly the method of choice not only in analysis of (bio)chemical substances, but also in the identification of physical properties of various objects and their classification. The applications of spectrophotometry include such diversified tasks as monitoring of optical telecommunications links, assessment of eating quality of food, forensic classification of papers, biometric identification of individuals, detection of insect infestation of seeds and classification of textiles. In all those applications, large amounts of data generated by spectrophotometers are processed by various digital means in order to extract measurement information. The main objective of this paper is to review the state-of-the-art methodology for digital signal processing (DSP) when applied to data provided by spectrophotometric transducers and spectrophotometers. First, a general methodology of DSP applications in spectrophotometry, based on DSP-oriented models of spectrophotometric data, is outlined. Then, the most important classes of DSP methods for processing spectrophotometric data—the methods for DSP-aided calibration of spectrophotometric instrumentation, the methods for the estimation of spectra on the basis of spectrophotometric data, the methods for the estimation of spectrum-related measurands on the basis of spectrophotometric data—are presented. Finally, the methods for preprocessing and postprocessing of spectrophotometric data are overviewed. Throughout the review, the applications of DSP are illustrated with numerous examples related to broadly understood spectrophotometry.
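As a hedged illustration of the preprocessing class of methods mentioned above, here is a minimal sketch of one possible pipeline (Savitzky-Golay smoothing followed by a crude linear baseline subtraction) applied to a synthetic spectrum. The specific steps, filter choices and parameter values are assumptions for illustration, not taken from the review.

```python
# Minimal preprocessing sketch for spectrophotometric data: smoothing plus a
# simple baseline subtraction. The steps and parameters are illustrative
# assumptions; the review covers many alternative methods.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wavelength = np.linspace(400.0, 700.0, 601)  # synthetic wavelength grid, nm

def gaussian(x, center, width, height):
    return height * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic "measured" spectrum: two absorption bands + sloping baseline + noise
true_spectrum = gaussian(wavelength, 480.0, 8.0, 1.0) + gaussian(wavelength, 590.0, 12.0, 0.6)
baseline = 0.002 * (wavelength - 400.0) + 0.1
measured = true_spectrum + baseline + rng.normal(0.0, 0.02, wavelength.size)

# Step 1: Savitzky-Golay smoothing (window length and polynomial order are guesses)
smoothed = savgol_filter(measured, window_length=15, polyorder=3)

# Step 2: estimate a linear baseline from the band-free edges of the spectrum
# and subtract it (a deliberately simple stand-in for the reviewed methods)
edges = np.r_[0:30, -30:0]
slope, intercept = np.polyfit(wavelength[edges], smoothed[edges], deg=1)
corrected = smoothed - (slope * wavelength + intercept)

print("max |corrected - true|:", float(np.max(np.abs(corrected - true_spectrum))))
```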

37 citations

Journal ArticleDOI
04 Jun 1996
TL;DR: In this article, Tikhonov deconvolution is used to transform the processed spectrogram so as to facilitate finding initial estimates of its peak parameters; the resulting gains in accuracy of estimating the parameters of peaks are demonstrated using both synthetic and real-world spectrophotometric data.
Abstract: The problem of spectrogram interpretation is considered under the assumption that the parameters of spectral peaks-their positions and magnitudes-contain the information essential for spectrometric analysis. The subsequent use of Tikhonov deconvolution and iterative correction of the estimates of those parameters is proposed. Deconvolution is used for transforming the processed spectrogram in such a way as to facilitate finding initial estimates of its parameters. The advantages of the proposed approach, i.e., gains in accuracy of estimating the parameters of peaks, are demonstrated using both synthetic and real-world spectrophotometric data.
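A minimal sketch of this general idea follows, assuming a frequency-domain Tikhonov-regularized inverse filter followed by peak detection for the initial estimates. The blur kernel, regularization parameter and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: Tikhonov-regularized deconvolution of a blurred spectrogram, then
# peak detection to obtain initial estimates of peak positions and magnitudes.
# The FFT-based regularized inverse filter and all numeric settings are
# illustrative assumptions, not the paper's algorithm.
import numpy as np
from scipy.signal import find_peaks

n = 1024
x = np.arange(n)

def peak(center, width, height):
    return height * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic "true" spectrogram with three narrow peaks (two of them overlapping)
true = peak(300, 3.0, 1.0) + peak(330, 3.0, 0.7) + peak(620, 4.0, 0.9)

# Instrument blur: Gaussian impulse response, plus measurement noise
kernel = np.exp(-0.5 * ((x - n // 2) / 6.0) ** 2)
kernel /= kernel.sum()
G = np.fft.fft(np.fft.ifftshift(kernel))          # transfer function of the blur
blurred = np.real(np.fft.ifft(G * np.fft.fft(true)))
noisy = blurred + np.random.default_rng(1).normal(0.0, 0.002, n)

# Tikhonov-regularized deconvolution in the frequency domain:
#   X_hat(k) = conj(G(k)) * Y(k) / (|G(k)|^2 + lambda)
lam = 1e-3                                        # regularization parameter (guess)
restored = np.real(np.fft.ifft(np.conj(G) * np.fft.fft(noisy) / (np.abs(G) ** 2 + lam)))

# Initial estimates of peak positions (sample indices) and magnitudes
positions, props = find_peaks(restored, height=0.1, distance=5)
print(list(zip(positions.tolist(), np.round(props["peak_heights"], 2).tolist())))
```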

35 citations

Journal ArticleDOI
TL;DR: Six deconvolution algorithms most frequently used in instrumental applications are selected for closer analysis of their metrological and numerical properties, and conclusions concerning the computational complexity and accuracy of the compared algorithms are drawn.
Abstract: Deconvolution algorithms for measurand reconstruction are considered. Their metrological and numerical properties are briefly characterized. Six algorithms most frequently used for instrumental applications are selected for closer analysis. Their comparative study is based on the use of spectrometric-type synthetic data, calorimetric-type synthetic data and spectrometric real-world data. Conclusions concerning the computational complexity and accuracy of the compared algorithms, as well as their metrological applicability, are drawn.

KEY WORDS: Deconvolution algorithms; Instrumental analysis; Spectrometry

1. INTRODUCTION
Deconvolution algorithms have recently found numerous applications in various domains of measurement science and instrumentation technology. They have been successfully applied for solving the following problems of measurand reconstruction [1]:
(i) digital correction of dynamic errors arising in measurement channels [2-5];
(ii) interpretation of seismic signals [6-9];
(iii) improving resolution in spectrometry and chromatography [10-15];
(iv) reconstruction of thermokinetics in dynamic calorimetry [16];
(v) measuring the thickness of multilayer structures [17];
(vi) improving the quality of images in defectoscopy, electron microscopy, computerized tomography, etc. [18-22].
In all the above-listed cases, the problem of measurand reconstruction may be formulated as follows:
(i) Both the measurand x(t) and the raw result of measurement y(t) are real-valued functions of a scalar variable t, most often the modelling time.
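As a hedged sketch of the discrete form that such a formulation typically leads to (generic notation, not necessarily the paper's), with the Tikhonov-type estimate shown as one representative algorithm family:

```latex
% Generic notation; one representative algorithm family, not the paper's full list.
% After sampling, the convolution model y(t) = (g * x)(t) + e(t) becomes an
% ill-conditioned linear system in the sampled measurand vector x:
\mathbf{y} = \mathbf{G}\,\mathbf{x} + \mathbf{e}

% A Tikhonov-type deconvolution algorithm then computes the regularized estimate
\hat{\mathbf{x}} = \left(\mathbf{G}^{\mathsf{T}}\mathbf{G} + \lambda \mathbf{I}\right)^{-1}\mathbf{G}^{\mathsf{T}}\,\mathbf{y}, \qquad \lambda > 0

% where \lambda trades noise amplification against resolution; the compared
% algorithms differ mainly in how this trade-off is realized.
```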

30 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: There are two kinds of tutorial articles: those that provide a primer on an established topic and those that let us in on the ground floor of something of emerging importance.
Abstract: There are two kinds of tutorial articles: those that provide a primer on an established topic and those that let us in on the ground floor of something of emerging importance. The first type of tutorial can have a noted expert who has been gracious (and brave) enough to write a field guide about a particular topic. The other sort of tutorial typically involves researchers who have each been laboring on a topic for some years. Both sorts of tutorial articles are very much desired. But we, as an editorial board for both Systems and Transactions, know that there has been no logical place for them in the AESS until this series was started several years ago. With these tutorials, we hope to continue to give them a home, a welcome, and provide a service to our membership. We do not intend to publish tutorials on a regular basis, but we hope to deliver them once or twice per year. We need and welcome good, useful tutorial articles (both kinds) in relevant AESS areas. If you, the reader, can offer a topic of interest and an author to write about it, please contact us. Self-nominations are welcome, and even more ideal is a suggestion of an article that the editor(s) can solicit. All articles will be reviewed in detail. Criteria on which they will be judged include their clarity of presentation, relevance, and likely audience, and, of course, their correctness and scientific merit. As to the mathematical level, the articles in this issue are a good guide: in each case the author has striven to explain complicated topics in simple (well, tutorial) terms. There should be no (or very little) novel material: the home for archival science is the Transactions Magazine, and submissions that need to be properly peer reviewed would be rerouted there. Likewise, articles that are interesting and descriptive, but lack significant tutorial content, ought more properly be submitted to the Systems Magazine.

955 citations

Journal ArticleDOI
29 Aug 1992

553 citations

Journal ArticleDOI
TL;DR: About ninety empirical functions for the representation of chromatographic peaks have been collected and tabulated; the table, based on almost 200 references, reports for every function the most commonly used name, the most convenient equation, the applications and the mathematical properties.
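As one representative example of the kind of empirical peak function such a table collects (the plain Gaussian model; the symbols here are generic, and more realistic entries such as the exponentially modified Gaussian add asymmetry):

```latex
% Generic example of an empirical chromatographic peak model: the Gaussian,
% with peak height H, retention time t_R and standard deviation \sigma.
f(t) = H \exp\!\left( -\frac{(t - t_R)^{2}}{2\sigma^{2}} \right)
```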

215 citations