
Showing papers by "Technion – Israel Institute of Technology" published in 2006


Journal ArticleDOI
TL;DR: A novel algorithm for adapting dictionaries in order to achieve sparse signal representations, the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
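For readers who want the mechanics, here is a minimal numpy sketch of the alternation the abstract describes. The pursuit stage uses scikit-learn's OMP, one of the options the abstract names; the initialization and all variable names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Minimal K-SVD sketch: alternate sparse coding and per-atom SVD updates.
    Y: (d, N) matrix of training signals. Returns dictionary D and codes X."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding stage (any pursuit method works; OMP here).
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # Dictionary update stage: revise one atom at a time, together with
        # the coefficients that use it (the acceleration the abstract mentions).
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]             # signals using atom k
            if users.size == 0:
                continue
            X[k, users] = 0.0
            E = Y[:, users] - D @ X[:, users]       # residual without atom k
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                       # best rank-1 fit
            X[k, users] = s[0] * Vt[0]
    return D, X
```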

8,905 citations


Journal ArticleDOI
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image, and uses the K-SVD algorithm to obtain a dictionary that describes the image content effectively.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
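The "global image prior" mentioned above leads to a simple closed-form final step; in rough notation (mine, not copied from the paper), the MAP objective and the resulting image update are

$$\hat{X} = \arg\min_{X,\{\alpha_{ij}\}} \; \lambda\|X - Y\|_2^2 + \sum_{ij}\mu_{ij}\|\alpha_{ij}\|_0 + \sum_{ij}\|D\alpha_{ij} - R_{ij}X\|_2^2,$$

$$\hat{X} = \Bigl(\lambda I + \sum_{ij} R_{ij}^{T}R_{ij}\Bigr)^{-1}\Bigl(\lambda Y + \sum_{ij} R_{ij}^{T}D\hat{\alpha}_{ij}\Bigr),$$

where $Y$ is the noisy image, $R_{ij}$ extracts the patch at location $(i,j)$, and $\hat{\alpha}_{ij}$ are the sparse codes. The matrix being inverted is diagonal, so the update amounts to averaging the denoised overlapping patches with the noisy image.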

5,493 citations


Journal ArticleDOI
TL;DR: A new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is ideal for the nondegraded case.
Abstract: The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints.
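For context, the DPC rates being shown optimal take the following standard form (notation mine, not from the abstract): with users encoded in the order $1,\dots,K$ and transmit covariances $S_k$,

$$R_k = \log\frac{\det\bigl(I + H_k\bigl(\sum_{j\ge k} S_j\bigr)H_k^{\dagger}\bigr)}{\det\bigl(I + H_k\bigl(\sum_{j>k} S_j\bigr)H_k^{\dagger}\bigr)},$$

since interference from users encoded before user $k$ is presubtracted by dirty-paper coding and only later-encoded users contribute residual interference.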

1,899 citations


Proceedings ArticleDOI
17 Jun 2006
TL;DR: A novel algorithm for tracking an object in a video sequence represented by multiple image fragments or patches, which is able to handle partial occlusions or pose change and overcomes several difficulties which cannot be handled by traditional histogram-based algorithms.
Abstract: We present a novel algorithm (which we call "FragTrack") for tracking an object in a video sequence. The template object is represented by multiple image fragments or patches. The patches are arbitrary and are not based on an object model (in contrast with the traditional use of model-based parts, e.g., limbs and torso in human tracking). Every patch votes on the possible positions and scales of the object in the current frame, by comparing its histogram with the corresponding image patch histogram. We then minimize a robust statistic in order to combine the vote maps of the multiple patches. A key tool enabling the application of our algorithm to tracking is the integral histogram data structure [18], which allows us to extract histograms of multiple rectangular regions in the image in a very efficient manner. Our algorithm overcomes several difficulties which cannot be handled by traditional histogram-based algorithms [8, 6]. First, by robustly combining multiple patch votes, we are able to handle partial occlusions or pose change. Second, the geometric relations between the template patches allow us to take into account the spatial distribution of the pixel intensities - information which is lost in traditional histogram-based algorithms. Third, as noted by [18], tracking large targets has the same computational cost as tracking small targets. We present extensive experimental results on challenging sequences, which demonstrate the robust tracking achieved by our algorithm (even with the use of only gray-scale (noncolor) information).
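The integral histogram [18] is what makes the per-patch histogram comparisons cheap. A minimal sketch for grayscale images follows; the bin count and data layout are my choices, not the cited paper's.

```python
import numpy as np

def integral_histogram(img, n_bins=16):
    """ih[y, x, b] = number of pixels in bin b within img[:y, :x], so any
    rectangle's histogram costs 4 lookups per bin, independent of the
    rectangle's size (hence large targets cost the same as small ones).
    img: 2-D uint8 array."""
    bins = np.minimum(img.astype(np.int64) * n_bins // 256, n_bins - 1)
    onehot = np.eye(n_bins, dtype=np.int64)[bins]          # (H, W, n_bins)
    ih = np.zeros((img.shape[0] + 1, img.shape[1] + 1, n_bins), np.int64)
    ih[1:, 1:] = onehot.cumsum(axis=0).cumsum(axis=1)
    return ih

def rect_hist(ih, y0, x0, y1, x1):
    """Histogram of img[y0:y1, x0:x1] by inclusion-exclusion."""
    return ih[y1, x1] - ih[y0, x1] - ih[y1, x0] + ih[y0, x0]
```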

1,522 citations


Journal ArticleDOI
TL;DR: This work evaluates claims and counter-claims made about the physiological importance of these enzymes and the potential of their inhibitors, in the light of what the authors know, and still have to learn, about the structure, function and genetics of the monoamine oxidases and the disparate actions of their inhibitors.
Abstract: Monoamine oxidase inhibitors were among the first antidepressants to be discovered and have long been used as such. It now seems that many of these agents might have therapeutic value in several common neurodegenerative conditions, independently of their inhibition of monoamine oxidase activity. However, many claims and some counter-claims have been made about the physiological importance of these enzymes and the potential of their inhibitors. We evaluate these arguments in the light of what we know, and still have to learn, of the structure, function and genetics of the monoamine oxidases and the disparate actions of their inhibitors.

1,173 citations


Journal ArticleDOI
TL;DR: The proposed precoder design is general, and as a special case, it solves the transmit rank-one beamforming problem and can significantly outperform existing linear precoders.
Abstract: In this paper, the problem of designing linear precoders for fixed multiple-input-multiple-output (MIMO) receivers is considered. Two different design criteria are considered. In the first, the transmitted power is minimized subject to signal-to-interference-plus-noise-ratio (SINR) constraints. In the second, the worst case SINR is maximized subject to a power constraint. It is shown that both problems can be solved using standard conic optimization packages. In addition, conditions are developed for the optimal precoder for both of these problems, and two simple fixed-point iterations are proposed to find the solutions that satisfy these conditions. The relation to the well-known uplink-downlink duality in the context of joint transmit beamforming and power control is also explored. The proposed precoder design is general, and as a special case, it solves the transmit rank-one beamforming problem. Simulation results in a multiuser system show that the resulting precoders can significantly outperform existing linear precoders.
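The paper's fixed-point iterations are not reproduced here, but the first design criterion (power minimization under SINR constraints) is easy to pose with an off-the-shelf conic solver, as the abstract indicates. The sketch below is the classic single-antenna-receiver special case written in cvxpy; the rotation making each h_k^H w_k real is the usual SOCP trick, and all parameter choices are illustrative.

```python
import numpy as np
import cvxpy as cp

def min_power_beamforming(H, gamma, sigma=1.0):
    """Minimize total transmit power s.t. every user's SINR >= gamma.
    H: (K, M) complex matrix whose k-th row is h_k^H. Returns (M, K) W."""
    K, M = H.shape
    W = cp.Variable((M, K), complex=True)
    constraints = []
    for k in range(K):
        hk = H[k]
        interference = cp.hstack([hk @ W[:, j] for j in range(K) if j != k])
        # SINR_k >= gamma  <=>  Re(h_k^H w_k) >= sqrt(gamma)*||[interf, sigma]||
        # (w_k's phase can be rotated so h_k^H w_k is real and nonnegative).
        constraints.append(
            cp.real(hk @ W[:, k]) >=
            np.sqrt(gamma) * cp.norm(cp.hstack([interference, sigma]))
        )
    cp.Problem(cp.Minimize(cp.sum_squares(W)), constraints).solve()
    return W.value
```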

987 citations


Journal ArticleDOI
TL;DR: Tomographic analysis demonstrates that the polarization state of pairs of photons emitted from a biexciton decay cascade becomes entangled when spectral filtering is applied and that the remanent information in the quantum dot degrees of freedom is negligible.
Abstract: Tomographic analysis demonstrates that the polarization state of pairs of photons emitted from a biexciton decay cascade becomes entangled when spectral filtering is applied. The measured density matrix of the photon pair satisfies the Peres criterion for entanglement by more than 3 standard deviations of the experimental uncertainty and violates Bell's inequality. We show that the spectral filtering erases the "which path" information contained in the photons' color and that the remanent information in the quantum dot degrees of freedom is negligible.
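The Peres criterion invoked here is the positive-partial-transpose test; for two qubits (the photon-pair case) it is both necessary and sufficient. In notation of mine, with $\rho$ the measured two-photon density matrix,

$$\bigl(\rho^{T_B}\bigr)_{i\mu,\,j\nu} = \rho_{i\nu,\,j\mu}, \qquad \rho \text{ is entangled} \iff \lambda_{\min}\bigl(\rho^{T_B}\bigr) < 0,$$

so satisfying the criterion "by more than 3 standard deviations" means the smallest eigenvalue of the partial transpose is negative with that significance.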

779 citations


Journal ArticleDOI
16 Nov 2006-Blood
TL;DR: Matched related allogeneic transplantations for ALL in first complete remission provide the most potent antileukemic therapy and considerable survival benefit for standard-risk patients, but the transplantation-related mortality for high-risk older patients was unacceptably high and abrogated the reduction in relapse risk.

698 citations


Journal ArticleDOI
TL;DR: The generalized multidimensional scaling algorithm is introduced, a computationally efficient continuous optimization algorithm for finding the least distortion embedding of one surface into another that allows for both full and partial surface matching.
Abstract: An efficient algorithm for isometry-invariant matching of surfaces is presented. The key idea is computing the minimum-distortion mapping between two surfaces. For this purpose, we introduce the generalized multidimensional scaling, a computationally efficient continuous optimization algorithm for finding the least distortion embedding of one surface into another. The generalized multidimensional scaling algorithm allows for both full and partial surface matching. As an example, it is applied to the problem of expression-invariant three-dimensional face recognition.
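GMDS embeds one surface into another, which needs geodesic machinery beyond a short snippet; as a stand-in, here is least-squares MDS into R^3 by gradient descent on the same stress that GMDS generalizes. Step size and iteration count are illustrative.

```python
import numpy as np

def mds_stress_descent(D, dim=3, n_iter=500, lr=1e-2, seed=0):
    """Find points X in R^dim whose pairwise Euclidean distances match a
    target (e.g., geodesic) distance matrix D by descending the stress
    sum_{i<j} (||x_i - x_j|| - D_ij)^2. GMDS replaces R^dim with a second
    triangulated surface."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((D.shape[0], dim))
    for _ in range(n_iter):
        diff = X[:, None, :] - X[None, :, :]        # (n, n, dim)
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)                 # avoid division by zero
        w = (dist - D) / dist                       # per-pair stress weights
        np.fill_diagonal(w, 0.0)
        X -= lr * 2.0 * (w[:, :, None] * diff).sum(axis=1)
    return X
```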

583 citations


Journal ArticleDOI
TL;DR: An alternative NICS-based method is introduced that is applied to several (4n + 2)- and 4n pi-electron systems (molecules and ions) in the singlet and triplet electronic states, including some of the problematic systems mentioned above.
Abstract: Nucleus-independent chemical shifts (NICS) have been used extensively for the identification of aromaticity properties of molecules, ions, intermediates, and transition states since their introduction in 1996 by Schleyer et al. Initially, probes (bq's) were placed at the centers of systems (NICS(0)) and later, 1 Å above the molecular planes (NICS(1)). However, contradicting assignments of aromaticity by NICS and other methods were found for some systems. In this article, an alternative NICS-based method is introduced. The method is based on scanning NICS values over a distance and separating them into in-plane and out-of-plane contributions. The shapes of the plots of the chemical shifts and their components as a function of the distance of the NICS probe (bq) from the molecular plane give a clear indication of diamagnetic and paramagnetic ring currents. This method is applied to several (4n + 2)- and 4n π-electron systems (molecules and ions) in the singlet and triplet electronic states, including some of the problematic systems mentioned above.
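Operationally, such a scan reduces to placing ghost atoms (bq) along the ring normal, computing the shielding tensor at each height, and splitting it into components. A small helper of my own devising, assuming the ring lies in the xy-plane:

```python
import numpy as np

def nics_scan_components(sigma_tensors):
    """sigma_tensors: list of (height_in_angstrom, 3x3 shielding tensor)
    at bq probes along the ring normal (z-axis). NICS is minus the
    shielding; returns (z, isotropic, in-plane, out-of-plane) rows."""
    rows = []
    for z, sigma in sigma_tensors:
        sigma = np.asarray(sigma)
        nics_iso = -np.trace(sigma) / 3.0
        nics_in_plane = -(sigma[0, 0] + sigma[1, 1]) / 2.0
        nics_out_of_plane = -sigma[2, 2]
        rows.append((z, nics_iso, nics_in_plane, nics_out_of_plane))
    return rows
```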

579 citations


Journal ArticleDOI
TL;DR: It is shown that SFT (SINGLE-FLOWER TRUSS), the tomato ortholog of FLOWERING LOCUS T (FT), induces flowering in day-neutral tomato and tobacco plants.
Abstract: The systemic model for floral induction, dubbed florigen, was conceived in photoperiod-sensitive plants but implies, in its ultimate form, a graft-transmissible signal that, although activated by different stimuli in different flowering systems, is common to all plants. We show that SFT (SINGLE-FLOWER TRUSS), the tomato ortholog of FLOWERING LOCUS T (FT), induces flowering in day-neutral tomato and tobacco plants. sft tomato mutant plants are late-flowering, with altered architecture and flower morphology. SFT-dependent graft-transmissible signals complement all developmental defects in sft plants and substitute for long-day stimuli in Arabidopsis, short-day stimuli in Maryland Mammoth tobacco, and light-dose requirements in tomato uniflora mutant plants. The absence of donor SFT RNA from flowering receptor shoots and the localization of the protein in leaf nuclei implicate florigen-like messages in tomato as a downstream pathway triggered by cell-autonomous SFT RNA transcripts. Flowering in tomato is synonymous with termination of the shoot apical meristems, and systemic SFT messages attenuate the growth of apical meristems before and independent of floral production. Floral enhancement by systemic SFT signals is therefore one pleiotropic effect of FT orthologs.

Proceedings ArticleDOI
17 Jun 2006
TL;DR: An approach is derived for blindly recovering the parameter needed to separate the airlight from the measurements, thus recovering contrast with neither user interaction nor the existence of sky in the frame; this eases the interaction and conditions needed for image dehazing.
Abstract: Outdoor imaging is plagued by poor visibility conditions due to atmospheric scattering, particularly in haze. A major problem is spatially-varying reduction of contrast by stray radiance (airlight), which is scattered by the haze particles towards the camera. Recent computer vision methods have shown that images can be compensated for haze, and even yield a depth map of the scene. A key step in such a scene recovery is subtraction of the airlight. In particular, this can be achieved by analyzing polarization-filtered images. However, the recovery requires parameters of the airlight. These parameters were estimated in past studies by measuring pixels in sky areas. This paper derives an approach for blindly recovering the parameter needed for separating the airlight from the measurements, thus recovering contrast, with neither user interaction nor existence of the sky in the frame. This eases the interaction and conditions needed for image dehazing, which also requires compensation for attenuation. The approach has proved successful in experiments, some of which are shown here.
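The recovery step being automated here follows the standard polarization-dehazing relations. The sketch below assumes the two parameters (the airlight degree of polarization p and the airlight at infinity A_inf, names mine) are already known; estimating them blindly is precisely the paper's contribution and is not shown.

```python
import numpy as np

def dehaze_polarized(I_min, I_max, p, A_inf):
    """Recover scene radiance from two polarizer images.
    I_min/I_max: frames at the polarizer's best/worst airlight rejection.
    p: airlight degree of polarization; A_inf: airlight at infinity.
    (Both parameters are assumed given; the paper estimates them blindly.)"""
    I_total = I_min + I_max                      # total intensity
    A = (I_max - I_min) / p                      # spatially varying airlight
    t = np.clip(1.0 - A / A_inf, 1e-3, 1.0)      # transmittance map
    return (I_total - A) / t                     # subtract airlight, then
                                                 # compensate attenuation
```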

Journal ArticleDOI
TL;DR: A solution to the problem of reflection and refraction of a polarized Gaussian beam on the interface between two transparent media is presented, and the transverse shifts of the beams' centers of gravity are calculated.
Abstract: We present a solution to the problem of reflection and refraction of a polarized Gaussian beam on the interface between two transparent media. The transverse shifts of the beams' centers of gravity are calculated. They always satisfy the total angular momentum conservation law for beams, but, in general, do not satisfy the conservation laws for individual photons as a consequence of the lack of the "which path" information in a two-channel wave scattering. The field structure for the reflected and refracted beams is analyzed. In the scattering of a linearly polarized beam, photons of opposite helicities are accumulated at the opposite edges of the beam: this is the spin Hall effect for photons, which can be registered in the cross-polarized component of the scattered beam.

Journal ArticleDOI
TL;DR: These studies support the hypothesis that SSRs, by virtue of their special mutational and functional qualities, have a major role in generating the genetic variation underlying adaptive evolution.

01 Jan 2006
TL;DR: It is shown that clustering techniques can improve the user experience of current tagging services and thus the success of collaborative tagging services.
Abstract: In this paper we discuss the use of clustering techniques to enhance the user experience and thus the success of collaborative tagging services. We show that clustering techniques can improve the user experience of current tagging services. First, we describe current limitations of tagging services; second, we give an overview of existing approaches. We then describe the algorithms we used for tag clustering and give experimental results. Finally, we explore the use of several techniques to identify semantically related tags.
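The paper surveys several clustering techniques rather than one fixed algorithm; as a generic illustration of the idea (mine, not necessarily the authors' recipe), one can represent each tag by the documents it annotates and group tags with similar usage:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_tags(doc_tag_matrix, tag_names, n_clusters=10, seed=0):
    """doc_tag_matrix: (n_docs, n_tags) 0/1 array; tags that co-occur on
    similar documents land in the same cluster."""
    tag_vectors = normalize(doc_tag_matrix.T.astype(float))  # one row per tag
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(tag_vectors)
    clusters = {}
    for tag, label in zip(tag_names, labels):
        clusters.setdefault(label, []).append(tag)
    return clusters
```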

Journal ArticleDOI
S. Schael, R. Barate, R. Bruneliere, I. De Bonis, and 1,279 more authors from 141 institutions
TL;DR: In this paper, four LEP collaborations, ALEPH, DELPHI, L3 and OPAL, have searched for the neutral Higgs bosons which are predicted by the minimal supersymmetric standard model (MSSM).
Abstract: The four LEP collaborations, ALEPH, DELPHI, L3 and OPAL, have searched for the neutral Higgs bosons which are predicted by the Minimal Supersymmetric Standard Model (MSSM). The data of the four collaborations are statistically combined and examined for their consistency with the background hypothesis and with a possible Higgs boson signal. The combined LEP data show no significant excess of events which would indicate the production of Higgs bosons. The search results are used to set upper bounds on the cross-sections of various Higgs-like event topologies. The results are interpreted within the MSSM in a number of “benchmark” models, including CP-conserving and CP-violating scenarios. These interpretations lead in all cases to large exclusions in the MSSM parameter space. Absolute limits are set on the parameter tanβ and, in some scenarios, on the masses of neutral Higgs bosons.

Proceedings Article
16 Jul 2006
TL;DR: It is proposed to enrich document representation through automatic use of a vast compendium of human knowledge--an encyclopedia, and empirical results confirm that this knowledge-intensive representation brings text categorization to a qualitatively new level of performance across a diverse collection of datasets.
Abstract: When humans approach the task of text categorization, they interpret the specific wording of the document in the much larger context of their background knowledge and experience. On the other hand, state-of-the-art information retrieval systems are quite brittle--they traditionally represent documents as bags of words, and are restricted to learning from individual word occurrences in the (necessarily limited) training set. For instance, given the sentence "Wal-Mart supply chain goes real time", how can a text categorization system know that Wal-Mart manages its stock with RFID technology? And having read that "Ciprofloxacin belongs to the quinolones group", how on earth can a machine know that the drug mentioned is an antibiotic produced by Bayer? In this paper we present algorithms that can do just that. We propose to enrich document representation through automatic use of a vast compendium of human knowledge--an encyclopedia. We apply machine learning techniques to Wikipedia, the largest encyclopedia to date, which surpasses in scope many conventional encyclopedias and provides a cornucopia of world knowledge. Each Wikipedia article represents a concept, and documents to be categorized are represented in the rich feature space of words and relevant Wikipedia concepts. Empirical results confirm that this knowledge-intensive representation brings text categorization to a qualitatively new level of performance across a diverse collection of datasets.
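The core representation idea can be sketched compactly: treat each Wikipedia article as one concept, build a TF-IDF space over the articles, and score each document against every concept. The paper's feature generator is considerably more elaborate; this is only the skeleton, with all names mine.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_concepts(wiki_articles, documents, top_k=50):
    """wiki_articles: list of article texts, one per concept.
    Returns, for each document, the indices of its top_k Wikipedia
    concepts, to be appended to the bag-of-words features before
    running the text categorizer."""
    vectorizer = TfidfVectorizer(stop_words="english")
    concept_space = vectorizer.fit_transform(wiki_articles)  # (concepts, vocab)
    doc_vectors = vectorizer.transform(documents)            # (docs, vocab)
    affinities = doc_vectors @ concept_space.T               # (docs, concepts)
    return [row.toarray().ravel().argsort()[::-1][:top_k]
            for row in affinities]
```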

Journal ArticleDOI
TL;DR: A fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique by minimizing a multiterm cost function, is proposed.
Abstract: In the last two decades, two related categories of problems have been studied independently in image restoration literature: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and, as conventional color digital cameras suffer from both low spatial resolution and color filtering, it is reasonable to address them in a unified context. In this paper, we propose a fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique by minimizing a multiterm cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used for spatially regularizing the luminance component, resulting in sharp edges and forcing interpolation along the edges and not across them. Simultaneously, Tikhonov regularization is used to smooth the chrominance components. Finally, an additional regularization term is used to force similar edge location and orientation in different color channels. We show that the minimization of the total cost function is relatively easy and fast. Experimental results on synthetic and real data sets confirm the effectiveness of our method.
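The bilateral regularization term can be written in a few lines. This sketch follows the usual bilateral total-variation form (a sum of decayed L1 differences over shifts), with wrap-around boundaries as a simplification and the shift range and decay factor as illustrative parameters.

```python
import numpy as np

def bilateral_tv(X, P=2, alpha=0.7):
    """Bilateral-TV penalty: sum over shifts (l, m) with |l|, |m| <= P of
    alpha^(|l|+|m|) * ||X - shift(X, l, m)||_1. Penalizes variation at
    several scales while preserving sharp edges (L1, not L2)."""
    cost = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(X, l, axis=0), m, axis=1)
            cost += alpha ** (abs(l) + abs(m)) * np.abs(X - shifted).sum()
    return cost
```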

Journal ArticleDOI
TL;DR: In this article, the fundamental underlying theory that is necessary in order to properly measure and interpret contact angles is discussed, emphasizing recent developments, and the practical implications of these theoretical aspects are presented.
Abstract: The measurement and interpretation of contact angles deceptively appear to be simple. This paper attempts to summarize the pitfalls in the field, and how to avoid them. First, the fundamental underlying theory that is necessary in order to properly measure and interpret contact angles is discussed, emphasizing recent developments. Then, the practical implications of these theoretical aspects are presented. In addition, the discussion highlights the missing pieces of the picture that need to be completed through future research.

Proceedings ArticleDOI
17 Jun 2006
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise should be removed from a given image, by defining a global image prior that forces sparsity over patches in every location in the image.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise should be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image, while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm, with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.

Journal ArticleDOI
TL;DR: The extent to which design and engineering knowledge can be practically embedded in production software for building information modeling (BIM) is explored, and a building object behavior (BOB) description notation and method is developed as a shorthand protocol for designing, validating and sharing the design intent of parametric objects.

Journal ArticleDOI
TL;DR: The activity of early-to-fire neurons reliably forecasts an upcoming network spike and provides means for expedited propagation between assemblies, and theory predicts that scale-free topology allows for synchronization time that does not increase markedly with network size; this prediction is supported.
Abstract: Cognitive processes depend on synchronization and propagation of electrical activity within and between neuronal assemblies. In vivo measurements show that the size of individual assemblies depends on their function and varies considerably, but the timescale of assembly activation is in the range of 0.1–0.2 s and is primarily independent of assembly size. Here we use an in vitro experimental model of cortical assemblies to characterize the process underlying the timescale of synchronization, its relationship to the effective topology of connectivity within an assembly, and its impact on propagation of activity within and between assemblies. We show that the basic mode of assembly activation, “network spike,” is a threshold-governed, synchronized population event of 0.1–0.2 s duration and follows the logistics of neuronal recruitment in an effectively scale-free connected network. Accordingly, the sequence of neuronal activation within a network spike is nonrandom and hierarchical; a small subset of neurons is consistently recruited tens of milliseconds before others. Theory predicts that scale-free topology allows for synchronization time that does not increase markedly with network size; our experiments with networks of different densities support this prediction. The activity of early-to-fire neurons reliably forecasts an upcoming network spike and provides means for expedited propagation between assemblies. We demonstrate this capacity by observing the dynamics of two artificially coupled assemblies in vitro, using neuronal activity of one as a trigger for electrical stimulation of the other.
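Detecting a "network spike" in such recordings is conceptually simple; a toy detector of my own construction, with an illustrative bin width and threshold, is below.

```python
import numpy as np

def detect_network_spikes(spike_times, n_neurons, bin_ms=25.0,
                          frac_threshold=0.25):
    """Return onset times (ms) of bins in which at least frac_threshold
    of the recorded neurons fire together. spike_times: iterable of
    (t_ms, neuron_id) pairs. Threshold and bin width are illustrative."""
    t_max = max(t for t, _ in spike_times)
    n_bins = int(np.ceil(t_max / bin_ms)) + 1
    active = np.zeros((n_bins, n_neurons), dtype=bool)
    for t, neuron in spike_times:
        active[int(t // bin_ms), neuron] = True
    fraction = active.sum(axis=1) / n_neurons
    return np.nonzero(fraction >= frac_threshold)[0] * bin_ms
```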

Proceedings ArticleDOI
21 Oct 2006
TL;DR: A measure on graphs, the minrank, is identified, which exactly characterizes the minimum length of linear and certain types of nonlinear INDEX codes; for natural classes of side-information graphs, including directed acyclic graphs, perfect graphs, odd holes, and odd anti-holes, minrank is the optimal length of arbitrary INDEX codes.
Abstract: Motivated by a problem of transmitting data over broadcast channels (Birk and Kol, INFOCOM 1998), we study the following coding problem: a sender communicates with n receivers R_1, ..., R_n. He holds an input x ∈ {0,1}^n and wishes to broadcast a single message so that each receiver R_i can recover the bit x_i. Each R_i has prior side information about x, induced by a directed graph G on n nodes; R_i knows the bits of x in the positions {j : (i, j) is an edge of G}. We call encoding schemes that achieve this goal INDEX codes for {0,1}^n with side information graph G. In this paper we identify a measure on graphs, the minrank, which we conjecture to exactly characterize the minimum length of INDEX codes. We resolve the conjecture for certain natural classes of graphs. For arbitrary graphs, we show that the minrank bound is tight for both linear codes and certain classes of non-linear codes. For the general problem, we obtain a (weaker) lower bound that the length of an INDEX code for any graph G is at least the size of the maximum acyclic induced subgraph of G.
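For reference, the minrank measure, in my paraphrase of the standard definition over $\mathbb{F}_2$: say a matrix $A \in \mathbb{F}_2^{n\times n}$ fits $G$ if $A_{ii}=1$ for all $i$ and $A_{ij}=0$ whenever $i \neq j$ and $(i,j)$ is not an edge of $G$; then

$$\operatorname{minrk}_2(G) = \min\{\operatorname{rank}_{\mathbb{F}_2}(A) : A \text{ fits } G\},$$

and the rows of a minimum-rank fitting matrix yield a linear INDEX code of that length.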

Journal ArticleDOI
TL;DR: High-resolution crystal structures of sequence-specific complexes between the core domain of human p53 and different DNA half-sites are presented and establish a structural framework for understanding the mechanisms of specificity, affinity, and cooperativity of DNA binding by p53.

Journal ArticleDOI
TL;DR: The Informed-Source Coding On Demand approach for efficiently supplying nonidentical data from a central server to multiple caching clients over a broadcast channel is presented; k-partial cliques in a directed graph are defined, and ISCOD is cast in terms of partial-clique covers.
Abstract: The Informed-Source Coding On Demand (ISCOD) approach for efficiently supplying nonidentical data from a central server to multiple caching clients over a broadcast channel is presented. The key idea underlying ISCOD is the joint exploitation of the data blocks already cached by each client, the server's full knowledge of client-cache contents and client requests, and the fact that each client only needs to be able to derive the blocks requested by it rather than all the blocks ever transmitted or even the union of the blocks requested by the different clients. We present two-phase ISCOD algorithms: the server first creates ad-hoc error-correction sets based on its knowledge of client states; next, it uses erasure-correction codes to construct the data for transmission. Each client uses its cached data and the received supplemental data to derive its requested blocks. The result is up to a several-fold reduction in the amount of transmitted supplemental data. Also, we define k-partial cliques in a directed graph and cast ISCOD in terms of partial-clique covers.
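The gain from exploiting client caches can be seen in a toy scheduler that greedily groups requests into mutually-cached groups and XORs each group into one transmission. This illustrates the principle only; it is not the paper's erasure-code construction.

```python
def coded_broadcast(wants, has):
    """wants: {client: block_id}; has: {client: set of cached block_ids}.
    A client joins a group only if it caches every block already in the
    group and every member caches its block; one XOR of the group's blocks
    then lets each receiver strip off what it already has."""
    pending = dict(wants)
    transmissions = []
    while pending:
        group = []                                   # (client, block) pairs
        for client, block in list(pending.items()):
            if all(b in has[client] and block in has[c] for c, b in group):
                group.append((client, block))
                del pending[client]
        transmissions.append({b for _, b in group})
    return transmissions

# Three clients each caching the other two blocks are served by a single
# XOR packet instead of three plain transmissions:
print(coded_broadcast({1: "a", 2: "b", 3: "c"},
                      {1: {"b", "c"}, 2: {"a", "c"}, 3: {"a", "b"}}))
```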

Journal ArticleDOI
23 Mar 2006-Nature
TL;DR: Both HSR1 and eEF1A are required for HSF1 activation; antisense oligonucleotides or siRNA against HSR1 impair the heat-shock response in vivo, rendering cells thermosensitive, and the central role of HSR1 during heat shock implies that targeting this RNA could serve as a new therapeutic model for cancer, inflammation and other conditions associated with HSF1 deregulation.
Abstract: The heat-shock transcription factor 1 (HSF1) has an important role in the heat-shock response in vertebrates by inducing the expression of heat-shock proteins (HSPs) and other cytoprotective proteins [1]. HSF1 is present in unstressed cells in an inactive monomeric form and becomes activated by heat and other stress stimuli. HSF1 activation involves trimerization and acquisition of a site-specific DNA-binding activity [2,3], which is negatively regulated by interaction with certain HSPs [4-6]. Here we show that HSF1 activation by heat shock is an active process that is mediated by a ribonucleoprotein complex containing translation elongation factor eEF1A and a previously unknown non-coding RNA that we term HSR1 (heat shock RNA-1). HSR1 is constitutively expressed in human and rodent cells and its homologues are functionally interchangeable. Both HSR1 and eEF1A are required for HSF1 activation in vitro; antisense oligonucleotides or short interfering (si)RNA against HSR1 impair the heat-shock response in vivo, rendering cells thermosensitive. The central role of HSR1 during heat shock implies that targeting this RNA could serve as a new therapeutic model for cancer, inflammation and other conditions associated with HSF1 deregulation.

Journal ArticleDOI
TL;DR: It is shown that for any function ω(n) that tends to infinity, the probability that the first homology group H_1(Y; F_2) of a random 2-complex Y vanishes tends to 0 when p = (2 log n − ω(n))/n and to 1 when p = (2 log n + ω(n))/n.
Abstract: Let $\Delta^{n-1}$ denote the $(n-1)$-dimensional simplex. Let $Y$ be a random 2-dimensional subcomplex of $\Delta^{n-1}$ obtained by starting with the full 1-dimensional skeleton of $\Delta^{n-1}$ and then adding each 2-simplex independently with probability $p$. Let $H_1(Y;\mathbb{F}_2)$ denote the first homology group of $Y$ with mod 2 coefficients. It is shown that for any function $\omega(n)$ that tends to infinity,

$$\lim_{n \to \infty} \operatorname{Prob}\bigl[H_1(Y;\mathbb{F}_2) = 0\bigr] = \begin{cases} 0, & p = \dfrac{2\log n - \omega(n)}{n}, \\ 1, & p = \dfrac{2\log n + \omega(n)}{n}. \end{cases}$$

Journal ArticleDOI
20 Oct 2006-Wear
TL;DR: In this paper, the effect of partial laser surface texturing (LST) on friction reduction in piston rings was evaluated on a reciprocating test rig with actual piston rings and cylinder liner segments.

Journal ArticleDOI
TL;DR: In this paper, the attitude-quaternion and gyro random drifts from vector measurements are estimated using a Kalman filter, and the covariance matrices of the system state-dependent noises are derived.
Abstract: This paper presents a novel Kalman filter (KF) for estimating the attitude-quaternion as well as gyro random drifts from vector measurements. Employing a special manipulation on the measurement equation results in a linear pseudo-measurement equation whose error is state-dependent. Because the quaternion kinematics equation is linear, the combination of the two yields a linear KF that eliminates the usual linearization procedure and is less sensitive to initial estimation errors. General accurate expressions for the covariance matrices of the system state-dependent noises are developed. In addition, an analysis shows how to compute these covariance matrices efficiently. An adaptive version of the filter is also developed to handle modeling errors of the dynamic system noise statistics. Monte-Carlo simulations are carried out that demonstrate the efficiency of both versions of the filter. In the particular case of high initial estimation errors, a typical extended Kalman filter (EKF) fails to converge whereas the proposed filter succeeds.
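The linearity the filter exploits comes from the quaternion kinematics being linear in q for a given angular rate. A minimal propagation sketch follows, using a vector-first quaternion convention of my choosing and a first-order discretization.

```python
import numpy as np

def omega_matrix(w):
    """Omega(w) in the linear kinematics q_dot = 0.5 * Omega(w) @ q,
    with q = [q1, q2, q3, q0] (vector part first, scalar last)."""
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy,  wx],
                     [-wz, 0.0,  wx,  wy],
                     [ wy, -wx, 0.0,  wz],
                     [-wx, -wy, -wz, 0.0]])

def propagate_quaternion(q, w, dt):
    """One time-update step for the quaternion part of the filter state,
    approximating the matrix exponential to first order and renormalizing."""
    q_next = (np.eye(4) + 0.5 * dt * omega_matrix(w)) @ q
    return q_next / np.linalg.norm(q_next)
```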

Journal ArticleDOI
TL;DR: The present research postulates the requirements for a conceptual framework and theoretical basis of digital design; reviews the recent theoretical and historical background; and defines a generic schema of design characteristics through which the paradigmatic classes of digital design are formulated.