
Showing papers on "Kernel (image processing) published in 1995"


Journal ArticleDOI
TL;DR: The time-frequency representation developed in the present paper, based on a signal-dependent radially Gaussian kernel that adapts over time, surmounts difficulties and often provides much better performance.
Abstract: Time-frequency representations with fixed windows or kernels figure prominently in many applications, but perform well only for limited classes of signals. Representations with signal-dependent kernels can overcome this limitation. However, while they often perform well, most existing schemes are block-oriented techniques unsuitable for on-line implementation or for tracking signal components with characteristics that change with time. The time-frequency representation developed in the present paper, based on a signal-dependent radially Gaussian kernel that adapts over time, surmounts these difficulties. The method employs a short-time ambiguity function both for kernel optimization and as an intermediate step in computing constant-time slices of the representation. Careful algorithm design provides reasonably efficient computation and allows on-line implementation. Certain enhancements, such as cone-kernel constraints and approximate retention of marginals, are easily incorporated with little additional computation. While somewhat more expensive than fixed kernel representations, this new technique often provides much better performance. Several examples illustrate its behavior on synthetic and real-world signals.

357 citations
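A rough illustration of the kernel-in-the-ambiguity-domain idea described above (not the paper's adaptive optimization): compute a discrete ambiguity function, weight it with a fixed radially Gaussian kernel, and transform back toward the time-frequency plane. The spread parameter sigma is fixed here, whereas the paper adapts the radial profile over time; the normalized grid and all names are illustrative assumptions.

    import numpy as np

    def ambiguity_function(s):
        # Discrete symmetric ambiguity function A[doppler, lag] of a 1-D signal.
        N = len(s)
        lags = np.arange(-N // 2, N // 2)
        n = np.arange(N)
        A = np.zeros((N, len(lags)), dtype=complex)
        for j, l in enumerate(lags):
            prod = np.zeros(N, dtype=complex)
            valid = (n + l >= 0) & (n + l < N) & (n - l >= 0) & (n - l < N)
            prod[valid] = s[n[valid] + l] * np.conj(s[n[valid] - l])
            A[:, j] = np.fft.fftshift(np.fft.fft(prod))  # FFT over time -> Doppler axis
        return A

    def radially_gaussian_kernel(shape, sigma=0.3):
        # Phi(theta, tau) = exp(-r^2 / (2 sigma^2)) on a normalized ambiguity grid.
        th = np.linspace(-0.5, 0.5, shape[0])
        ta = np.linspace(-0.5, 0.5, shape[1])
        T, L = np.meshgrid(th, ta, indexing="ij")
        return np.exp(-(T**2 + L**2) / (2.0 * sigma**2))

    # Toy chirp: kernel-weighted ambiguity function, then a 2-D transform
    # back toward the time-frequency plane (axis/sign conventions glossed over).
    t = np.arange(256)
    s = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0005 * t**2))
    A = ambiguity_function(s)
    tfr = np.fft.fft2(A * radially_gaussian_kernel(A.shape))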


Journal ArticleDOI
TL;DR: A technique is presented that allows: 1) computing the best approximation of a given family using linear combinations of a small number of 'basis' functions; and 2) describing all finite-dimensional families, i.e., the families of filters for which a finite dimensional representation is possible with no error.
Abstract: Early vision algorithms often have a first stage of linear-filtering that 'extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsely the space of scales and orientations in order to reduce computation and storage costs. A technique is presented that allows: 1) computing the best approximation of a given family using linear combinations of a small number of 'basis' functions; and 2) describing all finite-dimensional families, i.e., the families of filters for which a finite dimensional representation is possible with no error. The technique is based on singular value decomposition and may be applied to generating filters in arbitrary dimensions and subject to arbitrary deformations. The relevant functional analysis results are reviewed and precise conditions for the decomposition to be feasible are stated. Experimental results are presented that demonstrate the applicability of the technique to generating multiorientation multi-scale 2D edge-detection kernels. The implementation issues are also discussed.

252 citations
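The core of the technique, approximating a sampled filter family by a few basis kernels via the singular value decomposition, can be sketched as follows; the oriented edge kernels and the choice of three basis functions are made up for the example.

    import numpy as np

    def svd_filter_basis(filter_bank, k):
        # filter_bank: (num_filters, h, w); returns k basis kernels and coefficients
        # so that filter_bank[i] ~ sum_j coeffs[i, j] * basis[j].
        n, h, w = filter_bank.shape
        F = filter_bank.reshape(n, h * w)            # one filter per row
        U, S, Vt = np.linalg.svd(F, full_matrices=False)
        basis = Vt[:k].reshape(k, h, w)              # top-k right singular vectors
        coeffs = U[:, :k] * S[:k]
        return basis, coeffs

    def oriented_edge_kernel(theta, size=15, sigma=2.0):
        # Gaussian first-derivative edge detector along direction theta.
        ax = np.arange(size) - size // 2
        X, Y = np.meshgrid(ax, ax)
        g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
        return (np.cos(theta) * X + np.sin(theta) * Y) * g

    thetas = np.linspace(0, np.pi, 16, endpoint=False)
    bank = np.stack([oriented_edge_kernel(th) for th in thetas])
    basis, coeffs = svd_filter_basis(bank, k=3)
    recon = np.einsum("ij,jhw->ihw", coeffs, basis)
    rel_err = np.linalg.norm(recon - bank) / np.linalg.norm(bank)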


Journal ArticleDOI
Ping Wah Wong
TL;DR: It is shown that the kernel estimation algorithm combined with MAP projection provides the same inverse-halftoning performance as in the case where the error diffusion kernel is known.

Abstract: Two different approaches in the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing that reconstructs a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known, and finds a gray-scale image that will be halftoned into the same binary image. Two projection algorithms, viz., minimum mean square error (MMSE) projection and maximum a posteriori probability (MAP) projection, that differ on the way an inverse quantization step is performed, are developed. Among the filtering and the two projection algorithms, MAP projection provides the best performance for inverse halftoning. Using techniques from adaptive signal processing, we suggest a method for estimating the error diffusion kernel from the given halftone. This means that the projection algorithms can be applied in the inverse halftoning of any error-diffused image without requiring any a priori information on the error diffusion kernel. It is shown that the kernel estimation algorithm combined with MAP projection provides the same performance in inverse halftoning as in the case where the error diffusion kernel is known.

146 citations
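The abstract does not give the estimator's form, so the following is only the generic LMS adaptive-filter idea behind "techniques from adaptive signal processing": identify an unknown FIR kernel from input/output data. The 1-D setup, tap count, and step size are illustrative and are not the paper's halftone-domain algorithm.

    import numpy as np

    def lms_identify_kernel(x, d, taps=4, mu=0.05, passes=3):
        # Estimate w such that d[n] ~ sum_k w[k] * x[n - k] (textbook LMS update).
        w = np.zeros(taps)
        for _ in range(passes):
            for n in range(taps - 1, len(x)):
                xn = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-taps+1]
                e = d[n] - w @ xn                  # instantaneous error
                w += mu * e * xn                   # LMS weight update
        return w

    # Toy check: recover known weights from noisy filtered data
    rng = np.random.default_rng(0)
    true_w = np.array([0.4375, 0.3125, 0.1875, 0.0625])  # Floyd-Steinberg weights, flattened to 1-D
    x = rng.standard_normal(5000)
    d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat = lms_identify_kernel(x, d)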


Journal ArticleDOI
TL;DR: In this paper, a blind deconvolution method was proposed to identify and remove the convolutional distortion in order to reconstruct the tissue response, thus enhancing the diagnostic quality of the ultrasonic image.
Abstract: We address the problem of improving the spatial resolution of ultrasound images through blind deconvolution. The ultrasound image formation process in the RF domain can be expressed as a spatio-temporal convolution between the tissue response and the ultrasonic system response, plus additive noise. Convolutional components of the dispersive attenuation and aberrations introduced by propagating through the object being imaged are also incorporated in the ultrasonic system response. Our goal is to identify and remove the convolutional distortion in order to reconstruct the tissue response, thus enhancing the diagnostic quality of the ultrasonic image. Under the assumption of an independent, identically distributed, zero-mean, non-Gaussian tissue response, we were able to estimate distortion kernels using bicepstrum operations on RF data. Separate 1D distortion kernels were estimated corresponding to axial and lateral image lines and used in the deconvolution process. The estimated axial kernels showed similarities to the experimentally measured pulse-echo wavelet of the imaging system. Deconvolution results from B-scan images obtained with clinical imaging equipment showed a 2.5-5.2 times gain in lateral resolution, where the definition of the resolution has been based on the width of the autocovariance function of the image. The gain in axial resolution was found to be between 1.5 and 1.9.

135 citations
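Once the 1-D axial and lateral distortion kernels have been estimated, applying them as separable deconvolution filters could look roughly like the sketch below. The bicepstrum-based kernel estimation itself is not reproduced; a plain frequency-domain Wiener inverse with an assumed noise-to-signal constant stands in for the deconvolution step.

    import numpy as np

    def wiener_deconvolve_1d(line, kernel, nsr=0.01):
        # Regularized inverse filter conj(H) / (|H|^2 + nsr), applied in the DFT domain.
        N = len(line)
        H = np.fft.fft(kernel, N)
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft(np.fft.fft(line) * G))

    def deconvolve_rf_image(img, axial_kernel, lateral_kernel, nsr=0.01):
        # Deconvolve columns with the axial kernel, then rows with the lateral kernel.
        out = np.apply_along_axis(wiener_deconvolve_1d, 0, img, axial_kernel, nsr)
        out = np.apply_along_axis(wiener_deconvolve_1d, 1, out, lateral_kernel, nsr)
        return out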


01 Jan 1995
TL;DR: In this article, the authors present a performance prediction model that allows the highest speedup to be predicted from the knowledge of the ratio of the computation time to the communication time, which is the main limiting factor in our programming environment.
Abstract: Concurrent computing on networks of distributed computers has gained tremendous attention and popularity in recent years. In this paper, we use this computing environment for the development of efficient parallel image convolution applications for grey-level images and binary images. Significant speedup was achieved using different image sizes, kernel sizes, and number of workstations. We also present a performance prediction model that agrees well with our experimental measurements and allows the highest speedup to be predicted from the knowledge of the ratio of the computation time to the communication time. The main limiting factor in our programming environment is the bandwidth of the network. Thus, it seems with emerging high-speed networks such as ATM networks, parallel computing on networks of distributed computers can be a very attractive alternative to traditional parallel computing on SIMD and MIMD multiprocessors in executing computationally intensive applications in general and image processing applications in particular.

123 citations
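The abstract states only that the highest achievable speedup follows from the computation-to-communication ratio; a generic model with that property (not necessarily the paper's own) treats communication as a fixed per-image overhead:

    def predicted_speedup(p, ratio):
        # ratio = T_comp / T_comm (sequential computation time over communication time).
        # S(p) = T_comp / (T_comp / p + T_comm) = p / (1 + p / ratio),
        # so S -> ratio as p grows: the ratio bounds the attainable speedup.
        return p / (1.0 + p / ratio)

    # e.g. if convolution takes 50x as long as shipping the image pieces,
    # 8 workstations give about 6.9x, and no number of workstations exceeds 50x.
    print(predicted_speedup(8, 50.0))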


Journal ArticleDOI
11 Jan 1995
TL;DR: It seems with emerging high-speed networks such as ATM networks, parallel computing on networks of distributed computers can be a very attractive alternative to traditional parallel computing on SIMD and MIMD multiprocessors in executing computationally intensive applications in general and image processing applications in particular.
Abstract: Concurrent computing on networks of distributed computers has gained tremendous attention and popularity in recent years. In this paper, we use this computing environment for the development of efficient parallel image convolution applications for grey-level images and binary images. Significant speedup was achieved using different image sizes, kernel sizes, and number of workstations. We also present a performance prediction model that agrees well with our experimental measurements and allows the highest speedup to be predicted from the knowledge of the ratio of the computation time to the communication time. The main limiting factor in our programming environment is the bandwidth of the network. Thus, it seems with emerging high-speed networks such as ATM networks, parallel computing on networks of distributed computers can be a very attractive alternative to traditional parallel computing on SIMD and MIMD multiprocessors in executing computationally intensive applications in general and image processing applications in particular.

117 citations


Patent
15 Jun 1995
TL;DR: In this paper, a system for detecting movement of a writing implement relative to a writing surface to determine the path of the writing implement is presented, where the pen tip is determined manually, by looking for a predetermined pen tip shape, or by finding a position of maximum motion in the image.

Abstract: A system for detecting movement of a writing implement relative to a writing surface to determine the path of the writing implement. The writing implement tip is determined within the image and used to form a kernel. The determination is made manually, by looking for a predetermined pen tip shape, or by looking for a position of maximum motion in the image. That kernel is tracked from frame to frame to define the path of the writing implement. The tracking is accomplished by correlating the kernel to the image: to the whole image, to a portion of the image near the last position of the kernel, or to a portion of the image predicted by a prediction filter.

101 citations
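A minimal sketch of the frame-to-frame tracking step, correlating the stored pen-tip kernel against a window around its last position, under assumed grayscale inputs; the whole-image and prediction-filter variants mentioned in the patent differ only in where the search window is placed.

    import numpy as np

    def track_kernel(frame, kernel, last_pos, search_radius=20):
        # Returns the top-left position in `frame` where `kernel` correlates best,
        # searching a (2r+1)^2 neighbourhood around last_pos.
        kh, kw = kernel.shape
        y0, x0 = last_pos
        k = (kernel - kernel.mean()) / (kernel.std() + 1e-9)
        best_score, best_pos = -np.inf, last_pos
        for dy in range(-search_radius, search_radius + 1):
            for dx in range(-search_radius, search_radius + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + kh > frame.shape[0] or x + kw > frame.shape[1]:
                    continue                       # window ran off the image
                patch = frame[y:y + kh, x:x + kw]
                p = (patch - patch.mean()) / (patch.std() + 1e-9)
                score = float((p * k).mean())      # normalized correlation
                if score > best_score:
                    best_score, best_pos = score, (y, x)
        return best_pos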


Patent
Leonardo Cohen
23 Aug 1995
TL;DR: In this article, the export record of an operating system kernel employing dynamically-linked loading modules is thunked so as to globally and forcibly redirect service requests from afterwards loaded modules to subclassing routines instead of to original servicing routines of the kernel.
Abstract: The export record of an operating system kernel employing dynamically-linked loading modules (e.g., portable-executable modules) is thunked so as to globally and forcibly redirect service requests from afterwards loaded modules to subclassing routines instead of to original servicing routines of the kernel. The base location of the kernel is determined from an Image_Base entry of its disk-image. An offset storing position in the export record is overwritten with a value equal to the value of the address of the subclassing routine minus the kernel's base address. Use of the thunked export record is forced even for ‘bound’ external references by altering the time stamp in the kernel's export record to a nonmatching value.

85 citations


Proceedings ArticleDOI
Gilles Bertrand, Zouina Aktouf
04 Jan 1995
TL;DR: In this paper, a three-dimensional parallel thinning algorithm for cubic grids with 26-connectivity is presented, in which the cubic grid is divided into 8 subfields that are successively activated.

Abstract: A three-dimensional parallel thinning algorithm is presented. This algorithm works on the cubic grid with 26-connectivity. It is based upon two topological numbers introduced elsewhere. These numbers allow us to check whether a point is simple and to detect end points. The strategy used for removing points in parallel without altering the topology of the image is based upon subfields: the cubic grid is divided into 8 subfields which are successively activated. The use of 4 subfields is also considered. One major interest of the subfield approach is that it is 'complete', i.e., all simple points which are not considered as skeletal points are removed. The proposed algorithm allows us to obtain a curve skeleton, a surface skeleton, as well as a topological kernel of the objects. Furthermore, it is possible to implement it using only Boolean conditions.

69 citations
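The subfield decomposition itself is simple to state: partition voxels by the parities of their coordinates, giving 8 classes in which no two distinct voxels are 26-adjacent, so simple points can be deleted in parallel within a class. The sketch below shows only that partition; the topological-number tests for simple and end points are not reproduced.

    import numpy as np

    def parity_subfields(shape):
        # Returns 8 boolean masks; voxels sharing one mask have fixed (x, y, z)
        # parities, hence no two distinct voxels of a mask are 26-adjacent.
        x, y, z = np.indices(shape)
        return [(x % 2 == i) & (y % 2 == j) & (z % 2 == k)
                for i in (0, 1) for j in (0, 1) for k in (0, 1)]

    # One thinning iteration would activate the 8 subfields in turn, removing
    # within each the simple points that are not retained as skeletal points.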


Journal ArticleDOI
TL;DR: A dose-point kernel convolution technique that provides a three-dimensional (3D) distribution of absorbed dose from a 3D distribution of the radionuclide 131I was validated.
Abstract: The objective of this study was to validate a dose-point kernel convolution technique that provides a three-dimensional (3D) distribution of absorbed dose from a 3D distribution of the radionuclide 131I. A dose-point kernel for the penetrating radiations was calculated by a Monte Carlo simulation and cast in a 3D rectangular matrix. This matrix was convolved with the 3D activity map furnished by quantitative single-photon-emission computed tomography (SPECT) to provide a 3D distribution of absorbed dose. The convolution calculation was performed using a 3D fast Fourier transform (FFT) technique, which takes less than 40 s for a 128 x 128 x 16 matrix on an Intel 486 DX2 (66 MHz) personal computer. The calculated photon absorbed dose was compared with values measured by thermoluminescent dosimeters (TLDs) inserted along the diameter of a 22 cm diameter annular source of 131I. The mean and standard deviation of the percentage difference between the measurements and the calculations were equal to -1% and 3.6%, respectively. This convolution method was also used to calculate the 3D dose distribution in an Alderson abdominal phantom containing a liver, a spleen, and a spherical tumour volume loaded with various concentrations of 131I. By averaging the dose calculated throughout the liver, spleen, and tumour, the dose-point kernel approach was compared with values derived using the MIRD formalism, and found to agree to better than 15%.

68 citations
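The 3D FFT convolution at the heart of the method can be sketched with numpy as follows; the placeholder kernel and the omitted scaling to absorbed-dose units (per-decay energy, voxel mass) are assumptions for illustration.

    import numpy as np

    def dose_from_activity(activity, kernel):
        # Linear (zero-padded) 3-D convolution via the FFT, cropped back to the
        # activity grid, mirroring the 128 x 128 x 16 SPECT use case.
        shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
        D = np.fft.irfftn(np.fft.rfftn(activity, shape) * np.fft.rfftn(kernel, shape), shape)
        sl = tuple(slice(k // 2, k // 2 + n) for k, n in zip(kernel.shape, activity.shape))
        return D[sl]

    activity = np.zeros((128, 128, 16)); activity[64, 64, 8] = 1.0   # toy point source
    kernel = np.ones((9, 9, 9)) / 9**3                               # placeholder, not a real 131I kernel
    dose = dose_from_activity(activity, kernel)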


Journal ArticleDOI
TL;DR: Comparative simulations show that metabolic lag can be used to consistently describe observations and that a convolution form can effectively represent microbial lag for this system.
Abstract: A model is introduced for microbial kinetics in porous media that includes effects of transients in the metabolic activity of subsurface microorganisms. The model represents the microbial metabolic activity as a functional of the history of aqueous phase substrates; this dependence is represented as a temporally nonlocal convolution integral. Conceptually, this convolution represents the activity of a microbial component as a fraction of its maximum activity, and it is conventionally known as the metabolic potential. The metabolic potential is used to scale the kinetic expressions to account for the metabolic state of the organisms and allows the representation of delayed response in the microbial kinetic equations. Calculation of the convolution requires the definition of a memory (or kernel) function that upon integration over the substrate history represents the microbial metabolic response. A simple piecewise-linear metabolic potential functional is developed here; however, the approach can be generalized to fit the observed behavior of specific systems of interest. The convolution that results from the general form of this model is nonlinear; these nonlinearities are handled by using two separate memory functions and by scaling the domains of the convolution integrals. The model is applied to describe the aerobic degradation of benzene in saturated porous media. Comparative simulations show that metabolic lag can be used to consistently describe observations and that a convolution form can effectively represent microbial lag for this system. Simulations also show that disregarding metabolic lag when it exists can lead to overestimation of the amount of substrate degraded.
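Written out, a generic form of the metabolic-potential convolution described above is (symbols illustrative; the paper uses a piecewise-linear functional split over two memory functions with scaled integration domains, which is not reproduced here):

$\phi(t) = \int_{0}^{t} g(t-\tau)\, S(\tau)\, d\tau, \qquad r(t) = \phi(t)\, r_{\max}\, \dfrac{S(t)}{K_s + S(t)},$

where $S$ is the aqueous substrate concentration history, $g$ is the memory (kernel) function, and the metabolic potential $\phi \in [0,1]$ scales a kinetic rate $r$, shown here as a Monod-type expression purely for illustration.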

Journal ArticleDOI
TL;DR: The design of small convolution kernels for the restoration and reconstruction of Advanced Very High Resolution Radiometer images is described; the design maximizes image fidelity subject to explicit constraints on the spatial support and resolution of the kernel.

Abstract: Describes the design of small convolution kernels for the restoration and reconstruction of Advanced Very High Resolution Radiometer (AVHRR) images. The kernels are small enough to be implemented efficiently by convolution, yet effectively correct degradations and increase apparent resolution. The kernel derivation is based on a comprehensive, end-to-end system model that accounts for scene statistics, image acquisition blur, sampling effects, sensor noise, and postfilter reconstruction. The design maximizes image fidelity subject to explicit constraints on the spatial support and resolution of the kernel. The kernels can be designed with finer resolution than the image to perform partial reconstruction for geometric correction and other remapping operations. Experiments demonstrate that small kernels yield fidelity comparable to optimal unconstrained filters with less computation.

Patent
25 Oct 1995
TL;DR: In this paper, a convolution filter stores pixel values and associated depth values (Z), with a filter kernel being selected from a look-up table in dependence on the depth of the centre pixel (Zc) in relation to a specified focus depth (P).

Abstract: A post-processing method and apparatus to produce focus/defocus effects in computer-generated images of three-dimensional objects. A convolution filter stores pixel values (V) and associated depth values (Z), with a filter kernel being selected from a look-up table in dependence on the depth of the centre pixel (Zc) in relation to a specified focus depth (P). To minimize spurious effects where filter kernels overlap objects at different depths in the image, an inhibition function stage varies the amount by which each pixel contributes to the kernel in dependence on that pixel's depth value (Z) and the centre pixel and focus depth values (Zc and P). Inhibition profiles over a range of contributing and centre pixel values are provided.
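A rough per-pixel sketch of the mechanism described in the abstract, with hypothetical data structures: the kernel is chosen from a look-up table by the centre pixel's (quantized) defocus, and each contributing pixel is weighted by an inhibition function of its own depth, the centre depth, and the focus depth.

    import numpy as np

    def defocus_pixel(V, Z, cy, cx, kernel_lut, focus_depth, inhibition):
        # V, Z: pixel values and depths; kernel_lut: list of blur kernels indexed
        # by quantized |Zc - focus_depth|; inhibition(z, zc, p) -> weight in [0, 1].
        zc = Z[cy, cx]
        kern = kernel_lut[min(int(abs(zc - focus_depth)), len(kernel_lut) - 1)]
        r = kern.shape[0] // 2
        num = den = 0.0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                y, x = cy + dy, cx + dx
                if not (0 <= y < V.shape[0] and 0 <= x < V.shape[1]):
                    continue
                w = kern[dy + r, dx + r] * inhibition(Z[y, x], zc, focus_depth)
                num += w * V[y, x]
                den += w
        return num / den if den > 0 else V[cy, cx]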

Proceedings Article
01 Jan 1995
TL;DR: The architecture of the FTS real-time executive, the framework for IRCAM's real-time applications, is described; FTS includes both classical real-time distributed kernel features and a message-based object system.

Abstract: This paper describes the architecture of the FTS real-time executive, which is the framework for IRCAM's real-time applications. FTS includes both classical real-time distributed kernel features and a message-based object system. FTS is a portable and configurable system that now exists on several platforms, which are described.

Journal ArticleDOI
TL;DR: With this method, it is ensured that total primary energy deposited due to primary photon interactions in a unit mass at a point is equal to Kc at that point.
Abstract: In photon beam convolution, the distribution of energy deposition about a primary photon interaction site due to charged particles set in motion at that site is represented by the primary kernel. Energy deposited due to scattered photons, bremsstrahlung, and annihilation photons is represented by the scatter kernel. As the energy deposited in each kernel voxel is normalized to the energy imparted at the interaction site, it is known as a fractional energy distribution. In terma-based convolution, where kernels are normalized to total energy imparted at the interaction site and are convolved with the terma in the dose calculation process, the sum of fractional energies contained in the primary kernel is equal to the ratio of collision kerma (Kc) to terma (T) corresponding to the energy spectrum used to generate the kernel. Since the ratio of collision kerma to terma increases with depth as the beam hardens, the integral fractional energy in a primary kernel formed for the spectrum at the surface is less than the ratio Kc/T at depth. This causes primary dose to be increasingly underestimated with depth and scatter dose to be increasingly overestimated. Single polyenergetic convolution (using polyenergetic primary and scatter kernels formed using a polyenergetic primary photon spectrum) is thus not as rigorous as if a separate convolution is performed for each energy component. The ratio of true primary dose to single polyenergetic primary dose increases almost linearly with depth and is almost equal to the Kc/T ratio. Primary and scatter dose are calculated correctly if a single polyenergetic convolution is performed in terms of Kc (for primary) and T-Kc (for scatter), where the kernels are weighted sums of monoenergetic kernels normalized to Kc and T-Kc. With this method, it is ensured that total primary energy deposited due to primary photon interactions in a unit mass at a point is equal to Kc at that point.
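One way to write the corrected single polyenergetic convolution described above, with $h_p$ and $h_s$ denoting the primary and scatter kernels renormalized to $K_c$ and $T-K_c$ respectively (notation illustrative, not taken from the paper):

$D(\vec{r}) = \big(K_c \otimes h_p\big)(\vec{r}) + \big((T - K_c) \otimes h_s\big)(\vec{r}),$

so that the energy attributed to primary dose at each point equals the collision kerma $K_c$ there, and the remainder $T - K_c$ drives the scatter component.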

Proceedings ArticleDOI
15 May 1995
TL;DR: Three different implementations of a traditional real-time kernel in hardware are presented, all of which improved performance and determinism by several orders of magnitude when compared with software-based real-time kernels.

Abstract: The article presents three different implementations of a traditional real-time kernel in hardware. All approaches improved performance and determinism by several orders of magnitude when compared with software-based real-time kernels. The first implementation provides an integrated deterministic CPU and a deterministic and high-performance multitasking real-time kernel in hardware. The second implementation provides a deterministic and high-performance standalone multitasking real-time kernel in hardware, and the last implementation provides a deterministic and high-performance real-time kernel for homogeneous and heterogeneous multiprocessor real-time systems.

Journal ArticleDOI
TL;DR: A class of highly regular fast cyclic convolution algorithms, based on block pseudocirculant matrices, is obtained.
Abstract: Pseudocirculant matrices have been studied in the past in the context of FIR filtering, block filtering, polyphase networks and others. For completeness, their relation to cyclic convolution, stride permutations, circulant matrices, and to certain permutations of the Fourier matrix is explicitly established in this work. Within this process, a class of highly regular fast cyclic convolution algorithms, based on block pseudocirculant matrices, is obtained.
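The background facts the paper builds on — that cyclic convolution is multiplication by a circulant matrix, which the DFT diagonalizes — are easy to check numerically; the block-pseudocirculant algorithms themselves are not reproduced here.

    import numpy as np

    def cyclic_convolution(x, h):
        # y[n] = sum_k x[k] * h[(n - k) mod N], computed via the circulant matrix
        # of h and, equivalently, via the DFT (real-valued inputs assumed).
        N = len(x)
        C = np.array([np.roll(h, k) for k in range(N)]).T   # C[n, k] = h[(n - k) mod N]
        direct = C @ x
        via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
        assert np.allclose(direct, via_dft)
        return direct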



Proceedings ArticleDOI
Jiyoon Chung, Hyun S. Yang
21 May 1995
TL;DR: A tracker that provides real-time visual feedback using on-board low-cost processors is described, based on the two-stage visual tracking method (TSVTM), which consists of a real-time kernel, image saver, database, and vision module.

Abstract: We describe a tracker that provides real-time visual feedback using on-board low-cost processors. The proposed tracker is based on the two-stage visual tracking method (TSVTM), which consists of a real-time kernel, an image saver, a database, and a vision module. The real-time kernel, based on the earliest-deadline-first scheduling policy, provides the capability of processing tasks with time constraints within their deadlines. The image saver is responsible for keeping all incoming images until they can be processed. The database keeps both the estimated and the predicted location, velocity, intensity, etc. of each region that makes up the target. The vision module consists of two modules: the first-stage vision module (FSVM) and the second-stage vision module (SSVM). The FSVM processes the whole image to initially recognize targets using sophisticated vision algorithms, while the SSVM can easily find and track them using a focus-of-attention strategy based on a Kalman filter, since the SSVM knows the approximate location and useful features of the targets. Combining the above four mechanisms effectively, TSVTM can track targets every one-thirtieth of a second.

Journal ArticleDOI
TL;DR: In this paper, an extremal principle is formulated for the linear viscoelastic problem with general viscous kernel, which is an extension of the classical total potential energy principle of the linear elasticity.
Abstract: An extremal principle is formulated for the linear viscoelastic problem with a general viscous kernel. This is an extension of the classical total potential energy principle of linear elasticity. A discretized formulation in space and time is then presented for frame structures, using the finite element technique. Several numerical examples, for two different kinds of viscoelastic materials, attest to the accuracy and reliability of the proposed method. The matrix conditioning indexes obtained are compared with those achieved by applying the least squares method.

Journal ArticleDOI
TL;DR: In this article, the authors deduce a generalized Green's formula that acquires an additional bilinear form in u and v and is determined by the coefficients in the expansion of solutions near singularities of the boundary.
Abstract: The usual Green's formula connected with the operator of a boundary-value problem fails when both of the solutions u and v that occur in it have singularities that are too strong at a conic point or at an edge on the boundary of the domain. We deduce a generalized Green's formula that acquires an additional bilinear form in u and v and is determined by the coefficients in the expansion of solutions near singularities of the boundary. We obtain improved asymptotic representations of solutions in a neighborhood of an edge of positive dimension, which together with the generalized Green's formula makes it possible, for example, to describe the infinite-dimensional kernel of the operator of an elliptic problem in a domain with edge. Bibliography: 14 titles.

Journal ArticleDOI
TL;DR: The use of BIE as a noise reduction technique in digital radiographs of anthropomorphic chest phantoms is reported; by varying the magnitude of the kernel used, differing amounts of noise reduction and contrast enhancement can be obtained.

Abstract: Previously, it has been shown that Bayesian image estimation (BIE) can reduce the effects of scattered radiation and improve contrast-to-noise ratios (CNR) in digital radiographs of anthropomorphic chest phantoms by improving contrast while constraining noise. Here, the use of BIE as a noise reduction technique is reported. An anthropomorphic phantom was imaged with a previously calibrated photostimulable phosphor system using standard bedside chest radiography protocols. The Bayesian technique was then used to process this image. BIE incorporates a radial exponential convolution scatter model with two adjustable parameters. In previous reports, these parameters were optimized to reduce the residual fraction of scattered radiation in the processed image. Here, the parameters were adjusted to evaluate the potential of BIE to reduce image noise. While the full width at half maximum of the scatter model was held constant, the magnitude was varied. Evaluation was based on residual scatter fractions and CNR. The magnitude of the kernel in the scatter model was varied from 0.0 to 2.5 in steps of 0.5. Previously, it was found that an "ideal" scatter kernel magnitude of 2.33 provided a minimum residual scatter fraction. This magnitude corresponds to the average scatter-to-primary ratio in the chest radiograph. As the magnitude was increased, the residual scatter fraction decreased and the CNR increased in both the lungs and the mediastinum. However, as the magnitude was decreased, the percent noise also decreased; therefore, a lower magnitude kernel reduces noise. By varying the magnitude of the kernel used, differing amounts of noise reduction and contrast enhancement can be obtained. These results demonstrate that Bayesian image estimation can be used to both increase contrast and decrease noise in digital chest radiography.
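A sketch of the two-parameter radial exponential scatter kernel being varied in the study; the abstract specifies only a FWHM and a magnitude, so the tau/FWHM relation, the unit-sum normalization, and the example FWHM value below are assumptions.

    import numpy as np

    def radial_exponential_kernel(size, fwhm, magnitude):
        # k(r) ~ exp(-r / tau), scaled so the kernel sums to `magnitude`;
        # tau chosen so the profile falls to one half at r = fwhm / 2.
        tau = fwhm / (2.0 * np.log(2.0))
        ax = np.arange(size) - size // 2
        X, Y = np.meshgrid(ax, ax)
        k = np.exp(-np.sqrt(X**2 + Y**2) / tau)
        return magnitude * k / k.sum()

    # Magnitude swept from 0.0 to 2.5 in steps of 0.5, as in the evaluation;
    # 2.33 was the value found earlier to minimize the residual scatter fraction.
    kernels = [radial_exponential_kernel(129, fwhm=40.0, magnitude=m)
               for m in np.arange(0.0, 2.51, 0.5)]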

Journal ArticleDOI
01 Sep 1995
TL;DR: In this paper, a generalization of sampling series is introduced by considering expansions in terms of scaled translates of a basic function with coefficients given by sampled values of the convolution of a function f with a kernel of Fejer's type.
Abstract: A generalization of sampling series is introduced by considering expansions in terms of scaled translates of a basic function with coefficients given by sampled values of the convolution of a function f with a kernel of Fejér's type. Such expressions have been used in finite element approximations, sampling theory and, more recently, in wavelet analysis. This article is concerned with the convergence of these series for functions f that exhibit some kind of local singular behavior in time or frequency domains. Pointwise convergence at discontinuity points and Gibbs phenomena are analysed. The convergence in the $H^s$-norm is also investigated. Special attention is focused on multiresolution analysis approximations, and examples using the Daubechies scaling functions are presented.
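In symbols, a generalized sampling series of the kind described takes a form roughly like the following (notation illustrative, written from the abstract's description rather than the paper's exact definitions):

$(S_w f)(t) = \sum_{k \in \mathbb{Z}} (f * \chi_w)\!\left(\tfrac{k}{w}\right) \varphi(wt - k),$

where $\varphi$ is the basic function whose scaled translates form the expansion, $\chi_w$ is a kernel of Fejér's type, and $w > 0$ is the scale; the classical sampling series is recovered when $f * \chi_w$ is replaced by $f$ itself and $\varphi = \mathrm{sinc}$.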

Journal ArticleDOI
TL;DR: An incoherent image processor that uses orthogonally oriented one-dimensional acousto-optic cells to implement dynamic, arbitrary bipolar point-spread functions (PSF's) and initial experimental results are presented that demonstrate the realization of an arbitrary nonseparable PSF, image convolution with a bipolar PSF, two-dimensional image correlation, and an increased processor field of view.
Abstract: We describe an incoherent image processor that uses orthogonally oriented one-dimensional acousto-optic cells to implement dynamic, arbitrary bipolar point-spread functions (PSF’s). Arbitrary PSF’s are implemented as a linear superposition in time of separable PSF’s. The use of incoherent illumination increases the input field of view over that provided by coherent illumination, and implementation of the PSF by a pupil-plane filter yields a simple, compact single-lens imaging system. The acousto-optic cells offer a faster PSF update rate than that of conventional spatial light modulators, which is a critical issue for the implementation of a bipolar PSF as a subtraction between its positive and rectified negative parts. Initial experimental results are presented that demonstrate the realization of an arbitrary nonseparable PSF, image convolution with a bipolar PSF, two-dimensional image correlation, and an increased processor field of view.
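The bipolar-PSF trick mentioned at the end — an incoherent (intensity) system can realize only non-negative point-spread functions, so a bipolar kernel is split into its positive part and rectified negative part and the two filtered images are subtracted — can be mimicked digitally, e.g.:

    import numpy as np
    from scipy.signal import convolve2d

    def bipolar_psf_filter(image, psf):
        # Convolve separately with the (non-negative) positive and rectified
        # negative parts of the PSF, then subtract, as an incoherent system would.
        psf_pos = np.clip(psf, 0.0, None)
        psf_neg = np.clip(-psf, 0.0, None)
        return (convolve2d(image, psf_pos, mode="same")
                - convolve2d(image, psf_neg, mode="same"))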

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for the existence of canonical generalized factorization are given for a class of 2 × 2 matrix functions of Daniele-Khrapkov type.

Proceedings ArticleDOI
09 May 1995
TL;DR: A convolution neural network was used for classification of masses and normal tissue on mammograms and results indicate that using texture-images improves the classification accuracy.
Abstract: A convolution neural network (CNN) was used for classification of masses and normal tissue on mammograms. A generalized CNN was developed that uses multiple images derived from a single region of interest (ROI) as the input. The CNN input images were obtained from the ROIs using (i) averaging and subsampling; and (ii) texture feature extraction methods on smaller sub-regions inside the ROI. In (ii), features computed over different sub-regions were arranged as texture-images, and subsequently used as inputs to the CNN. The results indicate that using texture-images improves the classification accuracy.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: A new quadratic error criterion is introduced that takes into account the inherent system aliasing, and a multirate implementation of deformable kernels is proposed, capable of further reducing the computational cost.

Abstract: In computer vision and, increasingly, in rendering and image processing, it is useful to filter images with continuous rotated and scaled families of filters. For practical implementations, one can think of using a discrete family of filters and then interpolating from their outputs to produce the desired filtered version of the image. We propose a multirate implementation of deformable kernels, capable of further reducing the computational cost. The "basis" filters are applied to the different levels of a pyramidal decomposition. The new system is not shift-invariant; it suffers from "aliasing". We introduce a new quadratic error criterion that takes into account the inherent system aliasing. By using hypermatrix and Kronecker algebra, we are able to cast the global optimization task into a multilinear problem. An iterative procedure ("pseudo-SVD") is used to minimize the overall quadratic approximation error.

Journal Article
TL;DR: A two-stage embodiment of the process is described in which, immediately preceding the application of the foam, a similar foam is applied and removed together with part of the substances to be washed or rinsed from the textile material.

Abstract: A process for the washing or rinsing of dyed or printed, continuously advancing widths of textile material, whereby a foam is uniformly applied to one side of the textile material. The foam is produced from a liquid containing one or several surface-active agents, together with a compound soluble in the liquid that has no affinity to the fibers of the textile material to be treated but displays a high adsorption capacity for the substances to be washed from the textile material. Immediately following the application of the foam, the widths of the textile material are exposed to a steam treatment and then rinsed. In a two-stage embodiment of the process, immediately preceding the application of the foam, a similar foam is applied and removed together with part of the substances to be washed or rinsed from the textile material. The second stage, which includes the steam treatment, then merely removes the residual substances to be washed or rinsed out.

Proceedings ArticleDOI
23 Aug 1995
TL;DR: Kaleido, an experimental approach to designing an integrated multimedia system, is an ongoing project; the current snapshot of its architecture and implementation is presented.

Abstract: Emerging multimedia technologies have the potential to revolutionize the way humans organize, communicate and consume information. However, the full benefits of this technology have yet to reach the vast majority of computer users. One reason for this is that high-bandwidth networks are not widely deployed or accessible. But beyond the bandwidth barrier, there also exists a multimedia computing barrier. The architectures of present-day computers are not significantly different from their earlier-generation counterparts: fundamentally oriented towards data processing, they are simply much faster and better at it. Consequently, these systems (hardware and system software alike) are ill-equipped to cater to the special requirements imposed by multimedia. The most commonly used approach is to attach 'multimedia' devices such as cameras, sound cards, and CD-ROM drives as I/O peripherals to an existing computer system, and hope that a 'fast enough' CPU will run a few targeted applications reasonably well. It is possible, using this approach, to build some highly optimized (possibly through the use of custom hardware peripherals) stand-alone multimedia applications that perform tolerably. However, such systems do not provide a general-purpose infrastructure to support the integration of multimedia capability into an arbitrary user application. To enable this, it is necessary to incorporate support for multimedia into the hardware and software fabric of the system, not just as 'add-ons'. Kaleido is an experimental approach to designing such an integrated multimedia system. It is an ongoing project, and in this paper we present the current snapshot of our architecture and implementation.