
Showing papers by "Technion – Israel Institute of Technology published in 1997"


Journal ArticleDOI
TL;DR: A geodesic approach is presented, based on active contours evolving in time according to intrinsic geometric measures of the image; it connects classical energy-minimizing snakes with geometric active contours, and the results may be extended to 3D object segmentation.
Abstract: A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach to object segmentation makes it possible to connect classical “snakes” based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, allowing stable boundary detection when the image gradients suffer from large variations, including gaps. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. The scheme was implemented using an efficient algorithm for curve evolution. Experimental results of applying the scheme to real images, including objects with holes and medical data imagery, demonstrate its power. The results may be extended to 3D object segmentation as well.

4,967 citations
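The curve-evolution idea above can be sketched with a minimal level-set implementation. Everything here (the synthetic image, grid size, time step, and balloon-force constant) is an illustrative assumption, not the paper's actual experimental setup.

```python
import numpy as np

# toy image: bright disk on a dark background (stand-in for real data)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
I = ((xx - 32) ** 2 + (yy - 32) ** 2 < 14 ** 2).astype(float)

# edge-stopping function g(|grad I|): small near edges, ~1 elsewhere
gy, gx = np.gradient(I)
g = 1.0 / (1.0 + gx ** 2 + gy ** 2)
ggy, ggx = np.gradient(g)

# initial level-set function: signed distance to a circle enclosing the object
phi = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2) - 25.0

dt, c = 0.1, 1.0  # time step and balloon-force constant (toy values)
for _ in range(100):
    py, px = np.gradient(phi)
    grad_norm = np.sqrt(px ** 2 + py ** 2) + 1e-8
    # mean curvature of the level sets, clamped for numerical stability
    pyy, pyx = np.gradient(py)
    pxy, pxx = np.gradient(px)
    kappa = (pxx * py ** 2 - 2 * px * py * pxy + pyy * px ** 2) / grad_norm ** 3
    kappa = np.clip(kappa, -1.0, 1.0)
    # geodesic active contour flow:
    # phi_t = g * (kappa + c) * |grad phi| + grad g . grad phi
    phi = phi + dt * (g * (kappa + c) * grad_norm + ggx * px + ggy * py)

# the zero level set of phi is the evolved contour
```

The advection term grad g . grad phi is what pulls the contour into weak edges and gaps, which is the stability improvement the abstract describes.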


Journal ArticleDOI
TL;DR: Tree Augmented Naive Bayes (TAN) is singled out; it outperforms naive Bayes, yet at the same time maintains the computational simplicity and robustness that characterize naive Bayes.
Abstract: Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.

4,775 citations
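As a point of reference, the naive Bayes baseline that TAN augments can be sketched in a few lines; the toy data, smoothing constant, and helper name train_nb are illustrative assumptions, not from the paper.

```python
import math
from collections import defaultdict

def train_nb(X, y, alpha=1.0):
    """Discrete naive Bayes with Laplace smoothing; returns a predict()."""
    classes = sorted(set(y))
    prior = {c: y.count(c) / len(y) for c in classes}
    counts = defaultdict(lambda: defaultdict(float))
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            counts[(c, i)][v] += 1.0
    n_values = [len({row[i] for row in X}) for i in range(len(X[0]))]

    def predict(xs):
        best, best_lp = None, -math.inf
        for c in classes:
            lp = math.log(prior[c])
            for i, v in enumerate(xs):
                n_c = sum(counts[(c, i)].values())
                lp += math.log((counts[(c, i)][v] + alpha)
                               / (n_c + alpha * n_values[i]))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    return predict

# toy training set: feature 0 predicts the class perfectly
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = ['a', 'a', 'b', 'b']
predict = train_nb(X, y)
```

TAN relaxes the independence assumption by additionally letting each attribute condition on one other attribute, chosen via a maximum-weight spanning tree over conditional mutual information, while keeping learning search-free.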


Journal ArticleDOI
TL;DR: In this article, a unified mathematical theory is presented that takes advantage of the disparity of the length scales and is based on the asymptotic procedure of reduction of the full set of governing equations and boundary conditions to a simplified, highly nonlinear, evolution equation or to a set of equations.
Abstract: Macroscopic thin liquid films are entities that are important in biophysics, physics, and engineering, as well as in natural settings. They can be composed of common liquids such as water or oil, rheologically complex materials such as polymer solutions or melts, or complex mixtures of phases or components. When the films are subjected to the action of various mechanical, thermal, or structural factors, they display interesting dynamic phenomena such as wave propagation, wave steepening, and development of chaotic responses. Such films can display rupture phenomena creating holes, spreading of fronts, and the development of fingers. In this review a unified mathematical theory is presented that takes advantage of the disparity of the length scales and is based on the asymptotic procedure of reduction of the full set of governing equations and boundary conditions to a simplified, highly nonlinear, evolution equation or to a set of equations. As a result of this long-wave theory, a mathematical system is obtained that does not have the mathematical complexity of the original free-boundary problem but does preserve many of the important features of its physics. The basics of the long-wave theory are explained. If, in addition, the Reynolds number of the flow is not too large, the analogy with Reynolds's theory of lubrication can be drawn. A general nonlinear evolution equation or equations are then derived and various particular cases are considered. Each case contains a discussion of the linear stability properties of the base-state solutions and of the nonlinear spatiotemporal evolution of the interface (and other scalar variables, such as temperature or solute concentration). The cases reducing to a single highly nonlinear evolution equation are first examined.
These include: (a) films with constant interfacial shear stress and constant surface tension, (b) films with constant surface tension and gravity only, (c) films with van der Waals (long-range molecular) forces and constant surface tension only, (d) films with thermocapillarity, surface tension, and body force only, (e) films with temperature-dependent physical properties, (f) evaporating/condensing films, (g) films on a thick substrate, (h) films on a horizontal cylinder, and (i) films on a rotating disc. The dynamics of the films with a spatial dependence of the base-state solution are then studied. These include the examples of nonuniform temperature or heat flux at liquid-solid boundaries. Problems which reduce to a set of nonlinear evolution equations are considered next. Those include (a) the dynamics of free liquid films, (b) bounded films with interfacial viscosity, and (c) dynamics of soluble and insoluble surfactants in bounded and free films. The spreading of drops on a solid surface and moving contact lines, including effects of heat and mass transport and van der Waals attractions, are then addressed. Several related topics such as falling films and sheets and Hele-Shaw flows are also briefly discussed. The results discussed give motivation for the development of careful experiments which can be used to test the theories and exhibit new phenomena.

2,689 citations
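The single highly nonlinear evolution equation referred to above typically takes a lubrication form. A representative version (the notation here is assumed, not quoted from the review) for a film of thickness h(x, y, t), viscosity mu, and surface tension sigma, with a disjoining pressure Pi(h) collecting body forces, is:

```latex
\frac{\partial h}{\partial t}
  + \nabla \cdot \left[ \frac{h^{3}}{3\mu}\,
      \nabla\!\left( \sigma \nabla^{2} h + \Pi(h) \right) \right] = 0
```

Cases (a)-(i) in the review correspond to different choices of the terms inside the flux; for instance, the van der Waals case contributes a disjoining-pressure term with Pi(h) proportional to h^{-3}.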


Book ChapterDOI
17 Aug 1997
TL;DR: The attack exploits algebraic properties of modular arithmetic and is therefore applicable only to public key cryptosystems such as RSA, not to secret key algorithms such as the Data Encryption Standard (DES).
Abstract: In September 1996 Boneh, Demillo, and Lipton from Bellcore announced a new type of cryptanalytic attack which exploits computational errors to find cryptographic keys. Their attack is based on algebraic properties of modular arithmetic, and thus it is applicable only to public key cryptosystems such as RSA, and not to secret key algorithms such as the Data Encryption Standard (DES).

1,662 citations


Journal ArticleDOI
TL;DR: A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance compared with the ML and the POCS approaches.
Abstract: The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.

1,174 citations
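The ML (least-squares) component of the methodology can be illustrated with a one-dimensional Landweber iteration. The blur kernel, decimation factor, and signal below are toy assumptions, and a single measurement is used, whereas the paper fuses several warped frames.

```python
import numpy as np

np.random.seed(0)
n, factor = 32, 2

# unknown high-resolution signal (toy example)
x_true = np.zeros(n)
x_true[10:20] = 1.0

def blur(v):                       # symmetric moving-average blur
    return np.convolve(v, np.ones(3) / 3.0, mode="same")

def H(v):                          # forward model: blur then downsample
    return blur(v)[::factor]

def Ht(w):                         # adjoint: zero-upsample then blur
    v = np.zeros(n)
    v[::factor] = w
    return blur(v)

# one low-resolution, blurred, noisy measurement
y = H(x_true) + 0.01 * np.random.randn(n // factor)

# ML (least-squares) restoration by Landweber iteration:
# x <- x + lam * H^T (y - H x), convergent for lam < 2 / ||H||^2
x = np.zeros(n)
lam = 0.5
for _ in range(500):
    x = x + lam * Ht(y - H(x))
```

The POCS view would replace the gradient step with alternating projections onto constraint sets, and the hybrid method of the paper combines the two styles.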


Proceedings ArticleDOI
19 Oct 1997
TL;DR: Based on the quadratic residuosity assumption, a single-database, computationally private information retrieval scheme with O(n^ε) communication complexity for any ε > 0 is presented.
Abstract: We establish the following, quite unexpected, result: replication of data for the computational private information retrieval problem is not necessary. More specifically, based on the quadratic residuosity assumption, we present a single database, computationally private information retrieval scheme with O(n^ε) communication complexity for any ε > 0.

1,074 citations
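The core quadratic-residuosity trick can be sketched at toy scale. The primes, database, and indices below are illustrative assumptions (a real instantiation uses large primes), but the logic is the standard residue/non-residue encoding: the server cannot distinguish residues from non-residues of Jacobi symbol +1, while the client, knowing the factorization, can.

```python
import random
from math import gcd

random.seed(7)
p, q = 499, 547            # toy primes; real schemes use large ones
N = p * q

def is_qr(a, prime):        # Euler's criterion modulo an odd prime
    return pow(a % prime, (prime - 1) // 2, prime) == 1

def rand_unit(N):
    while True:
        r = random.randrange(2, N)
        if gcd(r, N) == 1:
            return r

def rand_qr(N):             # a quadratic residue mod N
    r = rand_unit(N)
    return (r * r) % N

def rand_qnr(N, p, q):      # a non-residue mod both primes (Jacobi symbol +1)
    while True:
        r = rand_unit(N)
        if not is_qr(r, p) and not is_qr(r, q):
            return r

# database viewed as an s x t bit matrix
db = [[1, 0, 1, 0],
      [0, 1, 1, 1],
      [1, 1, 0, 0],
      [0, 0, 0, 1]]
s, t = 4, 4
row, col = 2, 1             # client wants db[2][1]

# query: one element per column, a non-residue only at the target column
query = [rand_qnr(N, p, q) if j == col else rand_qr(N) for j in range(t)]

# server: per row, multiply query[j] for 1-bits and query[j]^2 for 0-bits
answers = []
for i in range(s):
    z = 1
    for j in range(t):
        z = z * (query[j] if db[i][j] else query[j] * query[j] % N) % N
    answers.append(z)

# client (knows p, q): the target row's answer is a residue iff the bit is 0
z = answers[row]
recovered = 0 if (is_qr(z, p) and is_qr(z, q)) else 1
```

With the database arranged as a roughly square matrix, the query and answer together cost on the order of sqrt(n) group elements, and recursing on this idea drives the exponent down toward any ε > 0.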


Journal ArticleDOI
TL;DR: The main shortcomings of conventional mapping-namely, prolonged x-ray exposure, low spatial resolution, and the inability to accurately navigate to a predefined site-can all be overcome with this new method.
Abstract: Background Cardiac mapping is essential for understanding the mechanisms of arrhythmias and for directing curative procedures. A major limitation of the current methods is the inability to accurately relate local electrograms to their spatial orientation. The objective of this study was to present and test the accuracy of a new method for nonfluoroscopic, catheter-based, endocardial mapping. Methods and Results The method is based on using a new locatable catheter connected to an endocardial mapping and navigating system. The system uses magnetic technology to accurately determine the location and orientation of the catheter and simultaneously records the local electrogram from its tip. By sampling a plurality of endocardial sites, the system reconstructs the three-dimensional geometry of the chamber, with the electrophysiological information color-coded and superimposed on the anatomy. The accuracy of the system was tested in both in vitro and in vivo studies and was found to be highly reproducible (SD, ...

847 citations


Journal ArticleDOI
TL;DR: Particular emphasis will be placed on estimation of rare events and on integration of the associated performance function into stochastic optimization programs.

710 citations


Journal ArticleDOI
TL;DR: It is shown that ENaC is a short‐lived protein that is ubiquitinated in vivo on the α and γ (but not β) subunits, and a paradigm for ubiquitination‐mediated regulation of ion channels is proposed.
Abstract: The epithelial Na+ channel (ENaC), composed of three subunits (alpha beta gamma), plays a critical role in salt and fluid homeostasis. Abnormalities in channel opening and numbers have been linked to several genetic disorders, including cystic fibrosis, pseudohypoaldosteronism type I and Liddle syndrome. We have recently identified the ubiquitin-protein ligase Nedd4 as an interacting protein of ENaC. Here we show that ENaC is a short-lived protein (t1/2 approximately 1 h) that is ubiquitinated in vivo on the alpha and gamma (but not beta) subunits. Mutation of a cluster of Lys residues (to Arg) at the N-terminus of gamma ENaC leads to both inhibition of ubiquitination and increased channel activity, an effect augmented by N-terminal Lys to Arg mutations in alpha ENaC, but not in beta ENaC. This elevated channel activity is caused by an increase in the number of channels present at the plasma membrane; it represents increases in both cell-surface retention or recycling of ENaC and incorporation of new channels at the plasma membrane, as determined by Brefeldin A treatment. In addition, we find that the rapid turnover of the total pool of cellular ENaC is attenuated by inhibitors of both the proteasome and the lysosomal/endosomal degradation systems, and propose that whereas the unassembled subunits are degraded by the proteasome, the assembled alpha beta gamma ENaC complex is targeted for lysosomal degradation. Our results suggest that ENaC function is regulated by ubiquitination, and propose a paradigm for ubiquitination-mediated regulation of ion channels.

702 citations


Journal ArticleDOI
TL;DR: The results suggest that the Gly-Ala repeat may affect MHC I restricted responses by inhibiting antigen processing via the ubiquitin/proteasome pathway.
Abstract: The Epstein–Barr virus (EBV) encoded nuclear antigen (EBNA) 1 is expressed in latently infected B lymphocytes that persist for life in healthy virus carriers and is the only viral protein regularly detected in all EBV associated malignancies. The Gly-Ala repeat domain of EBNA1 was shown to inhibit in cis the presentation of major histocompatibility complex (MHC) class I restricted cytotoxic T cell epitopes from EBNA4. It appears that the majority of antigens presented via the MHC I pathway are subject to ATP-dependent ubiquitination and degradation by the proteasome. We have investigated the influence of the repeat on this process by comparing the degradation of EBNA1, EBNA4, and Gly-Ala containing EBNA4 chimeras in a cell-free system. EBNA4 was efficiently degraded in an ATP/ubiquitin/proteasome-dependent fashion whereas EBNA1 was resistant to degradation. Processing of EBNA1 was restored by deletion of the Gly-Ala domain whereas insertion of Gly-Ala repeats of various lengths and in different positions prevented the degradation of EBNA4 without appreciable effect on ubiquitination. Inhibition was also achieved by insertion of a Pro-Ala coding sequence. The results suggest that the repeat may affect MHC I restricted responses by inhibiting antigen processing via the ubiquitin/proteasome pathway. The presence of regularly interspersed Ala residues appears to be important for the effect.

555 citations


Journal ArticleDOI
TL;DR: Recombinant VEGF145 induced the proliferation of vascular endothelial cells and promoted angiogenesis in vivo, and seems to possess a unique combination of biological properties distinct from those of previously characterized VEGF species.

Journal ArticleDOI
01 Apr 1997
TL;DR: It is constructively proved that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines, raising the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
Abstract: Recently, fully connected recurrent neural networks have been proven to be computationally rich-at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t)=/spl Psi/(u(t-n/sub u/), ..., u(t-1), u(t), y(t-n/sub y/), ..., y(t-1)) where u(t) and y(t) represent input and output of the network at time t, n/sub u/ and n/sub y/ are the input and output order, and the function /spl Psi/ is the mapping performed by a Multilayer Perceptron. We constructively prove that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use the NARX models, rather than conventional recurrent networks without any computational loss even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
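The NARX recurrence is easy to sketch directly from its defining formula. The orders, hidden width, and random (untrained) MLP weights below are illustrative assumptions standing in for a learned Ψ.

```python
import math
import random

random.seed(0)
n_u, n_y, hidden = 2, 2, 4   # input/output orders and hidden width (toy)
d_in = (n_u + 1) + n_y       # inputs u(t-n_u..t) and outputs y(t-n_y..t-1)

# random fixed MLP weights standing in for a trained Psi
W1 = [[random.uniform(-1, 1) for _ in range(d_in)] for _ in range(hidden)]
W2 = [random.uniform(-1, 1) for _ in range(hidden)]

def psi(x):                  # one-hidden-layer tanh MLP
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return math.tanh(sum(w * hi for w, hi in zip(W2, h)))

def narx_run(u):
    y = [0.0] * len(u)
    for t in range(len(u)):
        past_u = [u[t - k] if t - k >= 0 else 0.0 for k in range(n_u, -1, -1)]
        past_y = [y[t - k] if t - k >= 0 else 0.0 for k in range(n_y, 0, -1)]
        y[t] = psi(past_u + past_y)   # feedback comes only from past outputs
    return y

out = narx_run([math.sin(0.3 * t) for t in range(20)])
```

Note that the network state visible to Ψ is only a window of past outputs and inputs, which is exactly the limited feedback whose surprising Turing-equivalence the paper proves.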

Journal ArticleDOI
TL;DR: In this review the various test algorithms are screened, and their relative advantages and disadvantages are discussed.
Abstract: Quantitative sensory testing has become commonplace in clinical neurophysiology units. Measurement of the thermal and vibratory senses provides an estimate on function of sensory small and large fibers, respectively. Being psychophysical parameters, sensory threshold values are not objective, and various test algorithms have been developed aiming at optimized results. In this review the various test algorithms are screened, and their relative advantages and disadvantages are discussed. Considerations of quality control are reviewed, and the main fields of clinical application are described.

Journal ArticleDOI
TL;DR: It is proposed that repeat-number variation in SSRs (simple sequence repeats) is a major source of quantitative mutation, which has broad implications for understanding molecular processes of evolutionary adaptation, including the evolutionary control of the mutation process itself.

Journal ArticleDOI
TL;DR: This article extends Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus, and shows how abstract models may be constructed by symbolic execution of programs.
Abstract: The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state-explosion problem remains a major stumbling block. Recent experience indicates that solutions are to be found in the application of techniques for property-preserving abstraction and successive approximation of models. Most such applications have so far been based solely on the property-preserving characteristics of simulation relations. A major drawback of all these results is that they do not offer a satisfactory formalization of the notion of precision of abstractions. The theory of Abstract Interpretation offers a framework for the definition and justification of property-preserving abstractions. Furthermore, it provides a method for the effective computation of abstract models directly from the text of a program, thereby avoiding the need for intermediate storage of a full-blown model. Finally, it formalizes the notion of optimality, while allowing one to trade precision for speed by computing suboptimal approximations. For a long time, applications of Abstract Interpretation have mainly focused on the analysis of universal safety properties, i.e., properties that hold in all states along every possible execution path. In this article, we extend Abstract Interpretation to the analysis of both existential and universal reactive properties, as expressible in the modal μ-calculus. It is shown how abstract models may be constructed by symbolic execution of programs. A notion of approximation between abstract models is defined, while conditions are given under which optimal models can be constructed. Examples are given to illustrate this. We indicate conditions under which also falsehood of formulae is preserved. Finally, we compare our approach to those based on simulation relations.
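A minimal illustration of the Abstract Interpretation idea, using the classic sign domain (the domain and operator names here are illustrative, not from the article): abstract values over-approximate sets of integers, and abstract operators must be sound with respect to the concrete ones, losing precision where necessary.

```python
# abstract domain of signs: each value over-approximates a set of integers
def alpha(n):                       # abstraction of a single integer
    return "ZERO" if n == 0 else ("POS" if n > 0 else "NEG")

def abs_mul(a, b):                  # sound abstract multiplication
    if "ZERO" in (a, b):
        return "ZERO"
    if "TOP" in (a, b):
        return "TOP"
    return "POS" if a == b else "NEG"

def abs_add(a, b):                  # sound abstract addition
    if a == "ZERO":
        return b
    if b == "ZERO":
        return a
    if a == b:
        return a
    return "TOP"                    # POS + NEG: sign unknown, lose precision

# soundness check by sampling: the abstract result always covers the
# abstraction of the concrete result (equal, or the imprecise TOP)
for x in range(-3, 4):
    for y in range(-3, 4):
        assert abs_mul(alpha(x), alpha(y)) == alpha(x * y)
        assert abs_add(alpha(x), alpha(y)) in (alpha(x + y), "TOP")
```

Multiplication is exact on this domain while addition must sometimes answer TOP; the article's notion of precision formalizes exactly this kind of trade-off.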

Journal ArticleDOI
TL;DR: The SCIM, when administered by a multidisciplinary team, may be a useful instrument for assessing changes in everyday performance in patients with spinal cord lesions, and is more sensitive to changes in function in spinal cord lesion patients than the FIM.
Abstract: The Spinal Cord Independence Measure (SCIM) is a new disability scale developed specifically for patients with spinal cord lesions in order to make the functional assessments of patients with paraplegia or tetraplegia more sensitive to changes. The SCIM includes the following areas of function: self-care (subscore 0-20), respiration and sphincter management (0-40) and mobility (0-40). Each area is scored according to its proportional weight in these patients' general activity. The final score ranges from 0 to 100. This study was performed to evaluate the reliability of the SCIM and its sensitivity to functional changes in spinal cord lesion patients compared with the Functional Independence Measure (FIM). Thirty patients were included. Scores were recorded one week after admission and thereafter every month during hospitalization. Each area of function was assessed by a pair of staff members from the relevant discipline. The comparison of scores between each pair of raters revealed a remarkable consistency (r = 0.91-0.99; P < 0.0001; slope approximately 1; constant approximately 0). The total SCIM score (mean = 51, SD = 21) was lower than the total FIM score (mean = 87, SD = 23) owing to the difference in scale range structure and the relatively high cognitive scores of our patients; however, a relationship was noted between the scores of both scales (r = 0.85, P < 0.01). The SCIM was more sensitive than the FIM to changes in function of spinal cord lesion patients: the SCIM detected all the functional changes detected by the FIM total scoring, but the FIM missed 26% of the changes detected by the SCIM total scoring. The mean difference between consecutive scores was higher for the SCIM (P < 0.01). We conclude that the SCIM is a reliable disability scale and is more sensitive to changes in function in spinal cord lesion patients than the FIM. The SCIM, when administered by a multidisciplinary team, may be a useful instrument for assessing changes in everyday performance in patients with spinal cord lesions.

Journal ArticleDOI
TL;DR: A scheme for a block cipher which uses only one randomly chosen permutation, F, is suggested, removing the need to store or generate a multitude of permutations.
Abstract: We suggest a scheme for a block cipher which uses only one randomly chosen permutation, F. The key, consisting of two blocks, K1 and K2, is used in the following way. The message block is XORed with K1 before applying F, and the outcome is XORed with K2 to produce the cryptogram block. We show that the resulting cipher is secure (when the permutation is random or pseudorandom). This removes the need to store, or generate, a multitude of permutations.
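The construction is simple enough to sketch directly. The 16-bit block size below is a toy assumption; a real instantiation needs a full-width permutation.

```python
import random

random.seed(1)
BLOCK = 16                       # toy block size

# F: one public, randomly chosen permutation on BLOCK-bit values
F = list(range(1 << BLOCK))
random.shuffle(F)
F_inv = [0] * (1 << BLOCK)
for x, y in enumerate(F):
    F_inv[y] = x

# the secret key: two independent whitening blocks K1, K2
K1 = random.randrange(1 << BLOCK)
K2 = random.randrange(1 << BLOCK)

def encrypt(m):
    return F[m ^ K1] ^ K2        # E(m) = F(m XOR K1) XOR K2

def decrypt(c):
    return F_inv[c ^ K2] ^ K1    # inverts the whitening and F
```

Only the two XOR keys are secret; F itself can be public, which is the point of the scheme.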

Journal ArticleDOI
TL;DR: A characterization of learnability in the probabilistic concept model is given, solving an open problem posed by Kearns and Schapire, and it is shown that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
Abstract: Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or “agnostic”) framework. Furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.

Journal ArticleDOI
TL;DR: It was established that experimental conditions of powder preparation, electric field and stirring have a significant influence on suspension stability and deposit morphology.
Abstract: Hydroxyapatite powders were prepared by a chemical precipitation method and electrophoretically deposited on Ti6Al4V surgical alloy substrates. The powders were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), particle size distribution and zeta potential measurements. Prior to electrophoretic deposition, anodic films were obtained on Ti6Al4V and studied by the Auger method. It was established that experimental conditions of powder preparation, electric field and stirring have a significant influence on suspension stability and deposit morphology. The deposition yield was studied at various deposition durations and applied voltages. Sintered coatings were studied by SEM and XRD.

Journal ArticleDOI
TL;DR: A new model of the truss topology design problem is presented, where the rigidity of the resulting truss with respect both to given loading scenarios and small "occasional" loads is optimized.
Abstract: We present and motivate a new model of the truss topology design problem, where the rigidity of the resulting truss with respect both to given loading scenarios and small "occasional" loads is optimized. It is shown that the resulting optimization problem is a semidefinite program. We derive and analyze several equivalent reformulations of the problem and present illustrative numerical examples.

Book ChapterDOI
20 Jan 1997
TL;DR: A new optimized standard implementation of DES on 64-bit processors is described, which is about twice as fast as the fastest previously known standard DES implementation on the same processor.
Abstract: In this paper we describe a fast new DES implementation. This implementation is about five times faster than the fastest known DES implementation on a (64-bit) Alpha computer, and about three times faster than our new optimized DES implementation on 64-bit computers. It uses a non-standard representation and views the processor as a SIMD computer, i.e., as 64 parallel one-bit processors computing the same instruction. We also discuss the application of this implementation to other ciphers. We describe a new optimized standard implementation of DES on 64-bit processors, which is about twice as fast as the fastest previously known standard DES implementation on the same processor. Our implementations can also be used for fast exhaustive search in software, which can find a key in only a few days or a few weeks on existing parallel computers and computer networks.
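The SIMD/bitslice viewpoint can be illustrated in miniature (the gate f below is an arbitrary stand-in, not a DES S-box): pack one bit from each of 64 independent instances into each bit position of a machine word, and a single word-level operation then acts as 64 parallel one-bit gates.

```python
import random

random.seed(2)
LANES = 64

# 64 independent inputs for a 3-input, 1-bit gate f(a, b, c) = a ^ (b & c)
a = [random.randint(0, 1) for _ in range(LANES)]
b = [random.randint(0, 1) for _ in range(LANES)]
c = [random.randint(0, 1) for _ in range(LANES)]

def pack(bits):                 # lane i -> bit i of one machine word
    return sum(bit << i for i, bit in enumerate(bits))

A, B, C = pack(a), pack(b), pack(c)

R = A ^ (B & C)                 # two word ops = 64 parallel evaluations of f

result = [(R >> i) & 1 for i in range(LANES)]
```

A bitsliced cipher expresses every round as such gate networks, which is also why the representation suits exhaustive key search: each word-level evaluation tests 64 candidate keys at once.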

Book ChapterDOI
01 Nov 1997
TL;DR: Weakly contractive maps, as discussed by the authors, are a class of maps on closed convex sets of Hilbert spaces which are a priori degenerate in the general case; the convergence in norm of classical iterative sequences to fixed points of these maps is established.
Abstract: We introduce a class of contractive maps on closed convex sets of Hilbert spaces, called weakly contractive maps, which contains the class of strongly contractive maps and which is contained in the class of nonexpansive maps. We prove the existence of fixed points for the weakly contractive maps, which are a priori degenerate in the general case. We then establish the convergence in norm of classical iterative sequences to fixed points of these maps, give estimates of the convergence rate, and prove the stability of the convergence with respect to some perturbations of these maps. Our results extend the Banach principle, previously known for strongly contractive maps only.
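A concrete example (chosen here for illustration; it is not from the paper) is f(x) = x/(1+x) on [0, ∞): it satisfies |f(x) − f(y)| ≤ |x − y| with the contraction gap vanishing near the fixed point 0, so it is not a strong contraction, yet Picard iteration still converges, only sublinearly.

```python
def f(x):
    # weakly contractive on [0, inf): |f(x)-f(y)| = |x-y| / ((1+x)(1+y))
    return x / (1.0 + x)

x = 1.0
for n in range(10000):
    x = f(x)                  # classical Picard iteration x_{n+1} = f(x_n)

# closed form: x_n = x_0 / (1 + n*x_0), so after 10000 steps x ~ 1/10001
```

The 1/n rate, rather than the geometric rate of the Banach principle, is exactly the kind of degenerate behavior the weak-contraction estimates in the paper quantify.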

Journal ArticleDOI
TL;DR: Results indicate that two types of multisubunit ubiquitin ligase complexes, which are connected to the protein kinase regulatory network of the cell cycle in different ways, are responsible for the specific and programmed degradation of many cell cycle regulators.

Journal ArticleDOI
TL;DR: It is found, surprisingly, that fading may enhance performance in terms of Shannon theoretic achievable rates; the effect of a random number of users per cell is also investigated, and it is demonstrated that randomization is beneficial.
Abstract: For pt.I see ibid., vol.43, no.6, p.1877-94 (1997). A simple idealized linear (and planar) uplink, cellular, multiple-access communication model, where only adjacent cell interference is present and all signals may experience fading is considered. Shannon theoretic arguments are invoked to gain insight into the implications on performance of the main system parameters and multiple-access techniques. The model treated in Part I (Shamai, 1997) is extended here to account for cell-site receivers that may process also the received signal at an adjacent cell site, compromising thus between the advantage of incorporating additional information from other cell sites on one hand and the associated excess processing complexity on the other. Various settings which include fading, time-division multiple access (TDMA), wideband (WB), and (optimized) fractional inter-cell time sharing (ICTS) protocols are investigated and compared. In this case and for the WB approach and a large number of users per cell it is found, surprisingly, that fading may enhance performance in terms of Shannon theoretic achievable rates. The linear model is extended to account for general linear and planar configurations. The effect of a random number of users per cell is investigated and it is demonstrated that randomization is beneficial. Certain aspects of diversity as well as some features of TDMA and orthogonal code-division multiple access (CDMA) techniques in the presence of fading are studied in an isolated cell scenario.

Journal ArticleDOI
TL;DR: It is concluded that dietary supplementation of carotenoids may act as moderate hypocholesterolemic agents, secondary to their inhibitory effect on macrophage 3-hydroxy-3-methyl glutaryl coenzyme A (HMGCoA) reductase, the rate limiting enzyme in cholesterol synthesis.

Journal ArticleDOI
TL;DR: The existence of maximally efficient strategies for the manager, i.e., strategies that drive the system into the global network optimum, is investigated and it is shown that they are met in many cases of practical interest.
Abstract: In noncooperative networks users make control decisions that optimize their individual performance objectives. Nash equilibria characterize the operating points of such networks. Nash equilibria are generically inefficient and exhibit suboptimal network performance. Focusing on routing, a methodology is devised for overcoming this deficiency, through the intervention of the network manager. The manager controls part of the network flow, is aware of the noncooperative behavior of the users and performs its routing aiming at improving the overall system performance. The existence of maximally efficient strategies for the manager, i.e., strategies that drive the system into the global network optimum, is investigated. A maximally efficient strategy of the manager not only optimizes the overall performance of the network, but also induces an operating point that is efficient with respect to the performance of the individual users (Pareto efficiency). Necessary and sufficient conditions for the existence of a maximally efficient strategy are derived, and it is shown that they are met in many cases of practical interest. The maximally efficient strategy is shown to be unique and it is specified explicitly.

Journal ArticleDOI
TL;DR: The conclusion is that development of plants with increased resistance to inhibition of growth by the osmotic effects of external salinity (in addition to increased resistance to salt accumulation) is both feasible and desirable.
Abstract: This article reconsiders a recent hypothesis concerning the physiology of growth inhibition by salinity and its relevance to the breeding of salt-resistant crops (Munns 1993, Plant, Cell and Environment 16, pp. 15‐24). The hypothesis states that the osmotic effects of salinity on water availability will strongly and equally inhibit the growth of related species and varieties. The genotypic diversity needed for breeding increased resistance to growth inhibition by salinity is only expected to appear after weeks or months. Higher rates of salt accumulation in more sensitive varieties then lead to accelerated leaf senescence. This further inhibits new growth, as compared with more resistant varieties. Accordingly, breeders aiming to increase crop growth under salinity should focus efforts on manipulating genes which can decrease rates of salt accumulation. However, the osmotic inhibition of growth by salinity appears to involve regulatory physiological changes. Thus, some genotypic diversity might be expected. Clear evidence is presented for genotypic diversity in early growth responses to salt or PEG-induced osmotic stress, in several species and varieties. The conclusion is that development of plants with increased resistance to inhibition of growth by the osmotic effects of external salinity (in addition to increased resistance to salt accumulation) is both feasible and desirable.

Journal ArticleDOI
TL;DR: This work introduces a simple and natural strategy-selection rule, called highest cumulative reward (HCR), and shows a class of games in which HCR guarantees eventual convergence to a rationally acceptable social convention.

Proceedings ArticleDOI
09 Apr 1997
TL;DR: The main focus is to determine the impact of inaccurate information about the actual state of a node or network on the path selection process, whose goal is to identify the path that is most likely to satisfy the QoS requirements.
Abstract: We investigate the problem of routing connections with QoS requirements across one or more networks, when the information available for making routing decisions is inaccurate and expressed in some probabilistic manner. This uncertainty about the actual state of a node or network arises naturally in a number of different environments, that are reviewed in the paper. The main focus is to determine the impact of such inaccuracies on the path selection process, whose goal is then to identify the path that is most likely to satisfy the QoS requirements.

Journal ArticleDOI
TL;DR: The functionality of currently used metals and alloys is discussed with respect to stenting applications, and the "shape memory" and "pseudo-elasticity" properties of Nitinol, an alloy that is being considered for the manufacturing of urologic stents, are described.
Abstract: The performance of any material in the human body is controlled by two sets of characteristics: biofunctionality and biocompatibility. With the wide range of materials available in the mid-1990s, it is relatively easy to satisfy the requirements for mechanical and physical functionality of implantable devices. Therefore, the selection of materials for medical applications is usually based on considerations of biocompatibility. When metals and alloys are considered, the susceptibility of the material to corrosion and the effect the corrosion has on the tissue are the central aspects of biocompatibility. Corrosion resistance of the currently used 316L stainless steel, cobalt-chromium, and titanium-based implant alloys relies on their passivation by a thin surface layer of oxide. Stainless steel is the least corrosion resistant, and it is used for temporary implants only. The titanium and Co-Cr alloys do not corrode in the body; however, metal ions slowly diffuse through the oxide layer and accumulate in the...