Author

Azriel Rosenfeld

Other affiliations: Meiji University
Bio: Azriel Rosenfeld is an academic researcher from the University of Maryland, College Park. The author has contributed to research in topics: Image processing & Feature detection (computer vision). The author has an h-index of 94 and has co-authored 595 publications receiving 49,426 citations. Previous affiliations of Azriel Rosenfeld include Meiji University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors used machine learning to reveal and quantify risk factors with the goal of improving patient safety and quality of care. They used data from 9,234 observations on safety standards and 101 root-cause analyses of actual, major Never Events, including wrong site surgery and retained foreign item, and three random forest supervised machine learning models to identify risk factors.
Abstract: A surgical "Never Event" is a preventable error occurring immediately before, during or immediately following surgery. Various factors contribute to the occurrence of major Never Events, but little is known about their quantified risk in relation to a surgery's characteristics. Our study uses machine learning to reveal and quantify risk factors with the goal of improving patient safety and quality of care. We used data from 9,234 observations on safety standards and 101 root-cause analyses from actual, major "Never Events", including wrong site surgery and retained foreign item, and three random forest supervised machine learning models to identify risk factors. Using a standard 10-fold cross-validation technique, we evaluated the models' metrics, measuring their impact on the occurrence of the two types of Never Events through Gini impurity. We identified 24 contributing factors in six surgical departments: two had an impact of > 900% in Urology, Orthopedics, and General Surgery; six had an impact of 0-900% in Gynecology, Urology, and Cardiology; and 17 had an impact of < 0%. Combining factors revealed 15-20 pairs with an increased probability in five departments: Gynecology, 875-1900%; Urology, 1900-2600%; Cardiology, 833-1500%; Orthopedics, 1825-4225%; and General Surgery, 2720-13,600%. Five factors affected wrong site surgery's occurrence (-60.96 to 503.92%) and five affected retained foreign body (-74.65 to 151.43%): two nurses (66.26-87.92%), surgery length < 1 h (85.56-122.91%), and surgery length 1-2 h (-60.96 to 85.56%). Using machine learning, we could quantify the risk factors' potential impact on wrong site surgeries and retained foreign items in relation to a surgery's characteristics, suggesting that safety standards should be adjusted to a surgery's characteristics based on risk assessment in each operating room. MOH 032-2019.
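
The abstract above describes training random forest classifiers and ranking risk factors by Gini impurity under 10-fold cross-validation. The following is a minimal sketch of that kind of pipeline using scikit-learn; the file name, column names, and scoring metric are placeholders for illustration, not the study's actual data or settings.

# Hypothetical sketch: random-forest risk-factor analysis with
# Gini-based feature importance and 10-fold cross-validation.
# File and column names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("surgery_observations.csv")   # placeholder dataset
X = df.drop(columns=["never_event"])           # candidate risk factors
y = df["never_event"]                          # 1 = event occurred, 0 = no event

model = RandomForestClassifier(n_estimators=500, random_state=0)

# Standard 10-fold cross-validation of a classification metric.
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print("mean AUC:", scores.mean())

# Fit on the full data and rank factors by Gini importance
# (scikit-learn's feature_importances_ is mean decrease in impurity).
model.fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
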
ReportDOI
01 Jul 1981
TL;DR: In this paper, the authors reviewed recent results on digital straightness and convexity and showed that the criteria for a set of lattice points to be the digitization of a convex set, or for a digital arc to be the digitization of a straight line segment, depend critically on the definition of digitization that is used.
Abstract: Recent results on digital straightness and convexity are reviewed, and it is shown that the criteria for a set of lattice points to be the digitization of a convex set, or for a digital arc to be the digitization of a straight line segment, depend critically on the definition of digitization that is used.
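
The report above argues that whether a lattice set qualifies as the digitization of a straight line segment depends on which digitization scheme is adopted. As a hedged illustration only, and not the report's own definitions, the sketch below digitizes the same continuous segment under two common schemes, rounding and flooring of the y-coordinate, and shows that they can produce different lattice sets.

# Illustrative only: two plausible digitization schemes for a line
# segment y = m*x + b over integer x, showing they can disagree.
import math

def digitize_round(m, b, xs):
    # nearest-lattice-point digitization
    return {(x, round(m * x + b)) for x in xs}

def digitize_floor(m, b, xs):
    # "grid-floor" digitization
    return {(x, math.floor(m * x + b)) for x in xs}

xs = range(0, 11)
m, b = 0.4, 0.3
A = digitize_round(m, b, xs)
B = digitize_floor(m, b, xs)
print("rounding:", sorted(A))
print("flooring:", sorted(B))
print("schemes agree:", A == B)   # False for this slope and offset
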
Book ChapterDOI
01 Jan 1972
TL;DR: In this article, bounds on the complexity of grammars are discussed in terms of the length of the longest member in any rewriting rule, the size of the vocabulary, and the number of rules.
Abstract: Publisher Summary: This chapter discusses bounds on the complexity of grammars. One can study various measures of a grammar's complexity; the chapter considers three: rule length, vocabulary size, and number of rules. A difficult problem is that of finding a grammar that has minimum complexity with regard to two or more of the measures. The chapter discusses these in terms of λ, the length of the longest member of any rewriting rule; v, the size of the vocabulary; and ρ, the number of rules. Any language has grammars in which any one of these measures has a low value, but in general this can be achieved only at the expense of increasing one or both of the other two measures. Bounds on v as a function of λ are obtained in a number of cases; in most of these cases, the grammars are required to be context-free.
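
The three measures are straightforward to compute for any explicit grammar. The sketch below is a hypothetical illustration, not material from the chapter: it computes λ, v, and ρ for a toy context-free grammar given as a list of rewriting rules.

# Hypothetical illustration: the three complexity measures
# (lambda = length of the longest rule member, v = vocabulary size,
# rho = number of rules) for a toy context-free grammar.
toy_grammar = [
    ("S", ("A", "B")),
    ("A", ("a", "A")),
    ("A", ("a",)),
    ("B", ("b", "B", "b")),
    ("B", ("b",)),
]

rho = len(toy_grammar)                                   # number of rules
lam = max(max(len(rhs), 1) for _, rhs in toy_grammar)    # longest member (lhs has length 1)
vocab = {lhs for lhs, _ in toy_grammar} | {s for _, rhs in toy_grammar for s in rhs}
v = len(vocab)                                           # vocabulary size

print(f"lambda = {lam}, v = {v}, rho = {rho}")
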
Book ChapterDOI
01 Jan 2023
TL;DR: In this article, the authors explore the possible association between the number of categories a journal is classified into and its associated rankings, using the two most widely used indexing systems, Web of Science and Scopus.
Abstract: Journal classification systems use a variety of (partially) overlapping and non-exhaustive subject categories, which results in many journals being classified into more than a single subject category. Given a subject category, the respective journals are often ranked based on a common metric such as the Journal Impact Factor or SCImago Journal Rank. However, a specific journal might be ranked very differently across its associated subject categories. In this study, we set out to explore the possible association between the number of categories a journal is classified into and its associated rankings, using the two most widely used indexing systems: Web of Science and Scopus. Using known distance measures, our results show that a higher number of classified categories per journal is associated with an increased range and variance of the associated rankings within them. Findings and possible implications are discussed.
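
The analysis hinges on measuring how much a single journal's rank spreads across the subject categories it belongs to. The sketch below is a hypothetical illustration of that idea with made-up percentile ranks; it is not the paper's actual distance measures or data.

# Hypothetical illustration: spread of a journal's percentile rank
# across the subject categories it is classified into.
# Ranks below are invented for demonstration only.
import statistics

journal_ranks = {
    "Journal A": [0.91, 0.88, 0.55],   # classified into 3 categories
    "Journal B": [0.72, 0.70],         # classified into 2 categories
}

for name, ranks in journal_ranks.items():
    spread = max(ranks) - min(ranks)   # range of ranks across categories
    var = statistics.pvariance(ranks)  # variance of ranks across categories
    print(f"{name}: categories={len(ranks)}, range={spread:.2f}, variance={var:.4f}")
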
Journal ArticleDOI
TL;DR: The authors conducted an international, cross-disciplinary survey answered by 752 academics from 41 fields of research and 93 countries, statistically representative of the overall academic workforce, and found that authorship credit conflicts arise very early in one's academic career, even at the Master's and Ph.D. level, and become increasingly common over time.
Abstract: Collaboration among scholars has emerged as a significant characteristic of contemporary science. As a result, the number of authors listed in publications continues to rise steadily. Unfortunately, determining the authors to be included in the byline and their respective order entails multiple difficulties which often lead to conflicts. Despite the large volume of literature about conflicts in academia, it remains unclear how exactly such conflict is distributed over the main socio-demographic properties, as well as over the different types of interactions academics experience. To address this gap, we conducted an international and cross-disciplinary survey answered by 752 academics from 41 fields of research and 93 countries that statistically well-represent the overall academic workforce. Our findings are concerning and suggest that authorship credit conflicts arise very early in one's academic career, even at the Master's and Ph.D. level, and become increasingly common over time.

Cited by
Journal ArticleDOI
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.

28,073 citations
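
The detector described above marks edges at maxima of the gradient magnitude of a Gaussian-smoothed image, with the Gaussian scale trading detection against localization. A minimal way to experiment with that idea is the Canny implementation in scikit-image; the sketch below assumes a grayscale test image and treats sigma as the operator scale. The image and sigma values are arbitrary choices, not the paper's experiments.

# Minimal sketch: Canny-style edge detection (Gaussian smoothing,
# gradient magnitude, non-maximum suppression, hysteresis) via
# scikit-image, on a built-in sample image.
import numpy as np
from skimage import data, feature

image = data.camera().astype(float) / 255.0   # grayscale test image in [0, 1]

# Larger sigma = coarser operator: better noise suppression (detection),
# poorer localization, reflecting the detection/localization trade-off.
edges_fine = feature.canny(image, sigma=1.0)
edges_coarse = feature.canny(image, sigma=3.0)

print("edge pixels at sigma=1:", int(np.count_nonzero(edges_fine)))
print("edge pixels at sigma=3:", int(np.count_nonzero(edges_coarse)))
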

Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.

20,442 citations
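
The features in question are derived from gray-level co-occurrence (gray-tone spatial-dependence) matrices. The sketch below shows a typical way to compute a few such features with scikit-image; the distances, angles, and test image are arbitrary choices for illustration, not the paper's experimental setup.

# Sketch: Haralick-style texture features from a gray-level
# co-occurrence matrix (GLCM) using scikit-image.
import numpy as np
from skimage import data
from skimage.feature import graycomatrix, graycoprops

image = data.camera()   # 8-bit grayscale test image

# Co-occurrence matrix for 1-pixel offsets at 0 and 90 degrees.
glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# A few of the classic texture features, averaged over the two angles.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
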

Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function ψ(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.

20,028 citations
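
The pyramidal algorithm described above is what standard wavelet libraries implement as a multilevel 2-D discrete wavelet transform built from a filter bank. The sketch below uses PyWavelets as a stand-in; the choice of the 'db2' wavelet, the three decomposition levels, and the test image are arbitrary and are not tied to the paper.

# Sketch: multiresolution (pyramidal) wavelet decomposition of an
# image with PyWavelets. Wavelet and level count are arbitrary choices.
import numpy as np
import pywt
from skimage import data

image = data.camera().astype(float)

# Multilevel 2-D DWT: coeffs[0] is the coarse approximation; each later
# entry holds (horizontal, vertical, diagonal) detail sub-bands, i.e.
# the information lost between successive resolutions.
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)

print("coarse approximation shape:", coeffs[0].shape)
for k, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"detail sub-band set {k} shape:", cH.shape)

# Reconstruction from the decomposition (crop guards against padding).
recon = pywt.waverec2(coeffs, wavelet="db2")
err = np.max(np.abs(recon[:image.shape[0], :image.shape[1]] - image))
print("max reconstruction error:", float(err))
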

Journal ArticleDOI
TL;DR: An analogy is made between images and statistical-mechanics systems; under this analogy, annealing applied to the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations, producing a highly parallel "relaxation" algorithm for MAP estimation.
Abstract: We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.

18,761 citations
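
The approach above pairs a Gibbs/MRF image prior with stochastic relaxation and gradual temperature reduction. The sketch below is only a toy version of that idea: MAP denoising of a binary image under an Ising-type MRF prior via Gibbs sampling with annealing. It is not the paper's full model of gray levels and edge elements; the coupling, fidelity weight, and cooling schedule are illustrative choices.

# Toy sketch: simulated annealing / Gibbs sampling for MAP restoration
# of a binary (+1/-1) image with an Ising-type MRF prior.
import numpy as np

rng = np.random.default_rng(0)

def anneal_denoise(noisy, beta=1.0, h=1.5, sweeps=30, t0=4.0, t_min=0.1):
    """noisy: array of +1/-1 pixels; beta: neighbor coupling;
    h: data-fidelity weight; temperature decays geometrically."""
    x = noisy.copy()
    rows, cols = x.shape
    temp = t0
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                # Sum over the 4-neighborhood (local characteristics of the MRF).
                nb = 0.0
                if i > 0:        nb += x[i - 1, j]
                if i < rows - 1: nb += x[i + 1, j]
                if j > 0:        nb += x[i, j - 1]
                if j < cols - 1: nb += x[i, j + 1]
                # Conditional probability of the pixel being +1 at this temperature.
                local_field = beta * nb + h * noisy[i, j]
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * local_field / temp))
                x[i, j] = 1 if rng.random() < p_plus else -1
        temp = max(t_min, temp * 0.9)   # gradual temperature reduction
    return x

# Synthetic example: a square corrupted by flipping 15% of the pixels.
clean = -np.ones((32, 32)); clean[8:24, 8:24] = 1
flips = rng.random(clean.shape) < 0.15
noisy = np.where(flips, -clean, clean)
restored = anneal_denoise(noisy)
print("noisy error rate:   ", float(np.mean(noisy != clean)))
print("restored error rate:", float(np.mean(restored != clean)))
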