
Showing papers in "Sadhana-academy Proceedings in Engineering Sciences in 1996"


Journal ArticleDOI
TL;DR: In this paper, a chance-constrained linear programming formulation for reservoir operation of a multipurpose reservoir is presented, where the release policy is defined by a chance constraint that the probability of irrigation release in any period equalling or exceeding the irrigation demand is at least equal to a specified value P (called reliability level).
Abstract: This paper presents a chance-constrained linear programming formulation for reservoir operation of a multipurpose reservoir. The release policy is defined by a chance constraint that the probability of irrigation release in any period equalling or exceeding the irrigation demand is at least equal to a specified value P (called reliability level). The model determines the maximum annual hydropower produced while meeting the irrigation demand at a specified reliability level. The model considers variation in reservoir water level elevation and also the operating range within which the turbine operates. A linear approximation for the nonlinear power production function is assumed and the solution is obtained within a specified tolerance limit. The inflow into the reservoir is considered random. The chance constraint is converted into its deterministic equivalent using a linear decision rule and the inflow probability distribution. The model application is demonstrated through a case study.

29 citations
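The key step, converting the chance constraint into its deterministic equivalent, can be sketched for a normally distributed inflow (the paper combines this with a linear decision rule and the actual inflow distribution; the function name and figures below are illustrative only):

```python
from statistics import NormalDist

def min_release_for_reliability(demand, inflow_mean, inflow_std, reliability):
    """Deterministic equivalent of the chance constraint
    P(release + inflow >= demand) >= reliability
    for a normally distributed inflow: the inflow exceeds its
    (1 - reliability)-quantile with probability `reliability`, so the
    release need only cover the remaining demand."""
    dependable_inflow = NormalDist(inflow_mean, inflow_std).inv_cdf(1 - reliability)
    return max(0.0, demand - dependable_inflow)

# Release needed to meet a demand of 100 units at 90% reliability,
# with inflow ~ N(60, 10^2):
release = min_release_for_reliability(100.0, 60.0, 10.0, 0.9)
```

The probabilistic constraint thus becomes an ordinary linear bound on the release, which is what makes the linear programming formulation possible.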


Journal ArticleDOI
TL;DR: It is shown that a multilayer feedforward neural network can be trained to learn to make an opening bid with a new hand, and the need for a hierarchical architecture to deal with bids at all levels is discussed.
Abstract: The objective of this study is to explore the possibility of capturing the reasoning process used in bidding a hand in a bridge game by an artificial neural network. We show that a multilayer feedforward neural network can be trained to learn to make an opening bid with a new hand. The game of bridge, like many other games used in artificial intelligence, can easily be represented in a machine. But, unlike most games used in artificial intelligence, bridge uses subtle reasoning over and above the agreed conventional system, to make a bid from the pattern of a given hand. Although it is difficult for a player to spell out the precise reasoning process he uses, we find that a neural network can indeed capture it. We demonstrate the results for the case of one-level opening bids, and discuss the need for a hierarchical architecture to deal with bids at all levels.

16 citations
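As a rough illustration of the learning setup, a small feedforward network can be trained to make the open/pass decision from a toy hand encoding (the encoding, network size and opening rule below are invented for this sketch and are not the paper's):

```python
import numpy as np

# Hypothetical hand encoding: [high-card points / 10, then the four suit
# lengths / 13]. Toy target rule: open (1) with 12+ HCP, else pass (0).
def make_hand(hcp, lengths):
    return np.array([hcp / 10.0] + [l / 13.0 for l in lengths])

hands = [(make_hand(h, [4, 3, 3, 3]), 1.0 if h >= 12 else 0.0)
         for h in range(0, 21)]
X = np.stack([x for x, _ in hands])
y = np.array([t for _, t in hands])

# One hidden layer, sigmoid activations, plain batch gradient descent.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, 8); b2 = 0.0
sig = lambda z: 1 / (1 + np.exp(-z))
for _ in range(3000):
    h = sig(X @ W1 + b1)
    p = sig(h @ W2 + b2)
    d2 = p - y                              # output error (cross-entropy)
    W2 -= 0.5 * h.T @ d2 / len(y); b2 -= 0.5 * d2.mean()
    d1 = np.outer(d2, W2) * h * (1 - h)     # backpropagated hidden error
    W1 -= 0.5 * X.T @ d1 / len(y); b1 -= 0.5 * d1.mean(axis=0)

def bids_open(hcp):
    h = sig(make_hand(hcp, [4, 3, 3, 3]) @ W1 + b1)
    return sig(h @ W2 + b2) > 0.5
```

The paper's network learns the far richer mapping from full hand patterns to one-level opening bids; this only shows the training mechanics.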


Journal ArticleDOI
TL;DR: The TIFR phoneme-to-speech synthesizer, which utilizes a standard formant synthesizer as a speech production model, is described, and the methodology for evolving and organizing formant-based rules to drive this synthesizer is emphasized.
Abstract: Synthesis of continuous and unlimited speech is a matter of theoretical as well as technological interest. Independent efforts are needed for synthesis in Indian languages, which are substantially different from English and other European languages. The paper discusses basic synthesis issues like text-to-phoneme and phoneme-to-speech conversion and incorporation of prosody. The three commonly adopted methodologies of concatenation, formant and articulatory syntheses are compared. The TIFR phoneme-to-speech synthesizer, which utilizes a standard formant synthesizer as a speech production model, is described, and the methodology for evolving and organizing formant-based rules to drive this synthesizer is emphasized. The results of some perception tests are reported and a few potential applications are suggested. The direction of future work for enhancing the quality and expanding the scope of the synthesizer is indicated.

14 citations


Journal ArticleDOI
TL;DR: A method for automated reuse of proofs in software verification is presented; the phenomenon of reusability is illustrated, an evolutionary verification process model is given, and theoretical and technical aspects are discussed.
Abstract: This paper presents a method for automated reuse of proofs in software verification. Proofs about programs as well as proof attempts are used to guide the verification of modified programs, particularly of program corrections. We illustrate the phenomenon of reusability, present an evolutionary verification process model and discuss theoretical and technical aspects. Finally, we report on case studies with an implementation of this method in the Karlsruhe Interactive Verifier (KIV).

10 citations


Journal ArticleDOI
TL;DR: In this article, the authors highlight the suitability of the hypersonic shock tunnel at the Indian Institute of Science (IISc) for future space application studies in India.
Abstract: Real gas effects dominate the hypersonic flow fields encountered by modern day hypersonic space vehicles. Measurement of aerodynamic data for the design applications of such aerospace vehicles calls for special kinds of wind tunnels capable of faithfully simulating real gas effects. A shock tunnel is an established facility commonly used along with special instrumentation for acquiring the data for this purpose within a short time period. The hypersonic shock tunnel (HST1), established at the Indian Institute of Science (IISc) in the early 1970s, has been extensively used to measure the aerodynamic data of various bodies of interest at hypersonic Mach numbers in the range 4 to 13. Details of some important measurements made during the period 1975–1995 along with the performance capabilities of the HST1 are presented in this review. In view of the re-emergence of interest in hypersonics across the globe in recent times, the present review highlights the suitability of the hypersonic shock tunnel at the IISc for future space application studies in India.

10 citations


Journal ArticleDOI
TL;DR: In this article, a simulation-optimization procedure is presented for evaluating the extent of interbasin transfer of water in the Peninsular Indian river system consisting of 15 reservoirs on four river basins.
Abstract: A simulation-optimization procedure is presented for evaluating the extent of interbasin transfer of water in the Peninsular Indian river system consisting of 15 reservoirs on four river basins. A system-dependent simulation model is developed incorporating the concept of reservoir zoning to facilitate releases and transfers. The simulation model generates a large number of solutions, which are then screened by the optimization model. The Box complex nonlinear programming algorithm is used for the optimization. The performance of the system is evaluated through simulation with the optimal reservoir zones with respect to four indices: reliability, resiliency, vulnerability and deficit ratio. The results indicate that by operating the system of 15 reservoirs as a single unit the existing utilization of water may be increased significantly.

9 citations


Journal ArticleDOI
TL;DR: In this article, the underlying mechanics of the finite element method as applied to structural analysis is explored in paradigmatic terms, and it is shown that the stress correspondence paradigm has the most explanatory power and that it can be axiomatized from a very basic principle, the Hu-Washizu theorem, which is a variation of the least action principle.
Abstract: The underlying mechanics of the finite element method as applied to structural analysis is explored in paradigmatic terms. It is shown that the stress correspondence paradigm has the most explanatory power and that it can be axiomatized from a very basic principle, the Hu-Washizu theorem, which is a variation of the least action principle. Numerical experiments are presented to show that the predictions based on analytical quantification from the stress correspondence paradigm are verifiable.

8 citations


Journal ArticleDOI
TL;DR: In this paper, a new class of 1-writer shared variables, called weakly atomic variables, is defined, and an elegant general method of constructing atomic variables from weakly-atomic ones is presented.
Abstract: A new class of 1-writer shared variables, called weakly atomic variables, is defined, and an elegant general method of constructing atomic variables from weakly atomic ones is presented in this paper. Four examples of atomic variable constructions that use this method are described. Two of these constructions are new.

7 citations


Journal ArticleDOI
TL;DR: The aspects of AFSA, especially its power of morphological parsing of words in a computationally attractive manner, are discussed, and implementation notes based on object-oriented programming principles are provided.
Abstract: An NLP system for Indian languages should have a lexical subsystem that is driven by a morphological analyzer. Such an analyzer should be able to parse a word into its constituent morphemes and obtain the lexical projection of the word as a unification of the projections of the constituent morphemes. Lexical projections considered here are f-structures of the Lexical Functional Grammar (LFG). A formalism has been proposed by which the lexicon writer may specify the lexicon in four levels. The specifications are compiled into a stored lexical knowledge base on one hand and a formulation of derivational morphology called Augmented Finite State Automata (AFSA) on the other, to achieve a compact lexical representation. The aspects of AFSA, especially its power of morphological parsing of words in a computationally attractive manner, have been discussed. An additional utility of the AFSA, in the form of a spelling error corrector, has also been discussed. Bangla (Bengali) is considered as a case study.

7 citations


Journal ArticleDOI
TL;DR: In this paper, a unified method is presented for modeling delaminated stiffened laminated composite shells, synthesizing accurate multiple post-buckling solution paths under compressive loading, and predicting delamination growth.
Abstract: In this paper, a unified method is presented: (i) to model delaminated stiffened laminated composite shells; (ii) for synthesising accurate multiple post-buckling solution paths under compressive loading; and (iii) for predicting delamination growth. A multi-domain modelling technique is used for modelling the delaminated stiffened shell structures. Error-free geometrically nonlinear element formulations — a 2-noded curved stiffener element (BEAM2) and a 3-noded shell element (SHELL3) — are used for the finite element analysis. An accurate and simple automated solution strategy based on Newton type iterations is used for predicting the general geometrically nonlinear and postbuckling behaviour of structures. A simple method derived from the 3-dimensional J-integral is used for computing the pointwise energy release rate at the delamination front in the plate/shell models. Finally, the influence of post-buckling structural behaviour and the delamination growth on each other has been demonstrated.

6 citations


Journal ArticleDOI
TL;DR: An algorithm called a Hamming scan was developed recently for obtaining sequences with large merit factors and is adopted here to obtain such sequences within which there are nontrivial segments of large merit factor.
Abstract: An algorithm called a Hamming scan was developed recently for obtaining sequences with large merit factors and is adopted here to obtain such sequences within which there are nontrivial segments of large merit factors. Correlative detection of the return signal can be based simultaneously on the entire sequence and its segments with large merit factors. Such a coincidence detection scheme can be characterized by a Schur merit factor of the sequence. Sequences with large Schur merit factors are listed.
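The merit factor itself, and a single greedy pass in the spirit of the Hamming scan (flip each element in turn, keeping a flip if the merit factor improves, so each accepted step is Hamming distance 1 away), can be sketched as follows; the published algorithm may differ in detail:

```python
def merit_factor(seq):
    """Golay merit factor of a +/-1 sequence: N^2 divided by twice the
    aperiodic autocorrelation sidelobe energy."""
    n = len(seq)
    sidelobes = [sum(seq[i] * seq[i + k] for i in range(n - k))
                 for k in range(1, n)]
    return n * n / (2.0 * sum(c * c for c in sidelobes))

def hamming_scan_pass(seq):
    """One greedy pass: flip each element and keep the flip whenever it
    increases the merit factor."""
    seq = list(seq)
    for i in range(len(seq)):
        trial = seq[:]
        trial[i] = -trial[i]
        if merit_factor(trial) > merit_factor(seq):
            seq = trial
    return seq

# The length-13 Barker code is a classic large-merit-factor sequence
# (all sidelobes have magnitude at most 1):
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```

Computing the Schur merit factor of the coincidence detection scheme additionally scores the segments, which is the paper's extension.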

Journal ArticleDOI
TL;DR: A channel equalization method based on wavelet packets is proposed, using a minimum square variance algorithm for adaptively choosing the delay, and is shown analytically to perform as desired in a simple delay channel case.
Abstract: Recently, a considerable amount of attention has been given to the field of wavelets and wavelet packets, which have found numerous applications in signal representation, image compression and applied mathematics.

Journal ArticleDOI
TL;DR: In this paper, higher-order and conventional first-order shear deformation theories are used to study the impact response of composite sandwich shells, which is based on Donnell's shallow shell theory.
Abstract: In the present investigation, higher-order and conventional first-order shear deformation theories are used to study the impact response of composite sandwich shells. The formulation is based on Donnell’s shallow shell theory. Nine-noded Lagrangian elements are used for the finite element formulation. A modified Hertzian contact law is used to calculate the contact force. The results obtained from the present investigation are found to compare well with those existing in the open literature. The numerical results are presented to study the changes in the impact response due to the increase of core depth from zero to some specified value and the changes in core stiffness for a particular core depth.

Journal ArticleDOI
TL;DR: In this paper, a new mathematical formulation has been developed for the estimation of the two-dimensional orbital shift of INSAR based on the fringe line pattern in the interferogram of flat earth.
Abstract: Terrain height estimation through spaceborne interferometric synthetic aperture radar (INSAR) requires accurate knowledge of the orbital shift between repeat passes. Mathematical models are available for the estimation of horizontal orbital shift. However, in reality, the orbital shift between repeat passes is modelled as two-dimensional for the same azimuth scanline. In this paper, a new mathematical formulation has been developed for the estimation of the two-dimensional orbital shift of INSAR based on the fringe line pattern in the interferogram of flat earth.

Journal ArticleDOI
TL;DR: This article considers a Lagrangian relaxation based method for scheduling job shops and extends it to obtain a scheduling methodology for a real-world flexible manufacturing system with centralized material handling.
Abstract: Recently, efficient scheduling algorithms based on Lagrangian relaxation have been proposed for scheduling parallel machine systems and job shops. In this article, we develop real-world extensions to these scheduling methods. In the first part of the paper, we consider the problem of scheduling single operation jobs on parallel identical machines and extend the methodology to handle multiple classes of jobs, taking into account setup times and setup costs. The proposed methodology uses Lagrangian relaxation and simulated annealing in a hybrid framework. In the second part of the paper, we consider a Lagrangian relaxation based method for scheduling job shops and extend it to obtain a scheduling methodology for a real-world flexible manufacturing system with centralized material handling.

Journal ArticleDOI
TL;DR: A new h-refinement strategy based on a weighted average energy norm and enhanced by strain energy density ratios is proposed, and two typical problems are solved to demonstrate its efficiency over the conventional refinement strategy in the relative improvement of global asymptotic convergence.
Abstract: The theory and mathematical bases of a-posteriori error estimates are explained. It is shown that the Medial Axis of a body can be used to decompose it into a set of mutually non-overlapping quadrilateral and triangular primitives. A mesh generation scheme used to generate quadrilaterals inside these primitives is also presented together with its relevant implementation aspects. A new h-refinement strategy based on a weighted average energy norm and enhanced by strain energy density ratios is proposed and two typical problems are solved to demonstrate its efficiency over the conventional refinement strategy in the relative improvement of global asymptotic convergence.

Journal ArticleDOI
TL;DR: Real-Time Future Interval Logic as discussed by the authors is a temporal logic in which formulae have a natural graphical representation, resembling timing diagrams, and is invariant under real-time stuttering, properties that facilitate proof methods based on abstraction and refinement.
Abstract: Real-Time Future Interval Logic is a temporal logic in which formulae have a natural graphical representation, resembling timing diagrams. It is a dense real-time logic that is based on two simple temporal primitives: interval modalities for the purely qualitative part and duration predicates for the quantitative part. This paper describes the logic and gives a decision procedure for satisfiability by reduction to the emptiness problem for Timed Büchi Automata. This decision procedure forms the core of an automated proof-checker for the logic. The logic does not admit instantaneous states, and is invariant under real-time stuttering, properties that facilitate proof methods based on abstraction and refinement. The logic appears to be as strong as one can hope for without sacrificing elementary decidability. Two natural extensions of the logic, along lines suggested in the literature, lead to either non-elementariness or undecidability.

Journal ArticleDOI
TL;DR: The architecture of a second-generation expert system for the automated design of microprocessor-based systems is presented, integrating a device handbook knowledge base with a shallow expert system, to provide resilience and deep reasoning capability.
Abstract: We present the architecture of a second-generation expert system for the automated design of microprocessor-based systems. A novel feature is the integration of a device handbook knowledge base with a shallow expert system, to provide resilience and deep reasoning capability. The design tasks, knowledge sources, behaviour modelling scheme, behaviour mapping algorithms, and inter-layer communication are briefly described.

Journal ArticleDOI
TL;DR: A novel approach for lossless as well as lossy compression of monochrome images using Boolean minimization is proposed, which gives a better compression ratio than JPEG and is relatively slow, while the decompression time is comparable to that of JPEG.
Abstract: A novel approach for lossless as well as lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes. Each bit plane is divided into windows or blocks of variable size. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level function minimizer. The minimized cubes are encoded using a code set which satisfies the prefix property. Our technique of lossless compression involves linear prediction as a preprocessing step and has a compression ratio comparable to that of the JPEG lossless compression technique. Our lossy compression technique involves reducing the number of bit planes as a preprocessing step, which incurs minimal loss in the information of the image. The bit planes that remain after preprocessing are compressed using our lossless compression technique based on Boolean minimization. Qualitatively, one cannot visually distinguish between the original image and the lossy image, and the value of the mean square error is kept low. For a mean square error value close to that of the JPEG lossy compression technique, our method gives a better compression ratio. The compression scheme is relatively slow, while the decompression time is comparable to that of JPEG.
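The bit-plane preprocessing described above is straightforward to sketch (the ESPRESSO minimisation and cube-encoding stages are omitted; function names are illustrative):

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary planes, least significant first."""
    return [(img >> b) & 1 for b in range(8)]

def reassemble(planes):
    """Inverse of bit_planes: sum the planes back with their weights."""
    return sum(p.astype(np.uint8) << b for p, b in zip(planes, range(8)))

def drop_low_planes(img, keep):
    """Lossy preprocessing: zero out the 8-keep least significant planes,
    which bounds the per-pixel error by 2**(8-keep) - 1."""
    mask = 0xFF & ~((1 << (8 - keep)) - 1)
    return img & mask
```

Each surviving plane would then be blocked, converted to a switching function, and minimised before prefix encoding.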

Journal ArticleDOI
TL;DR: The results indicate that the SVD method is a reliable approach for estimation of attractor dimension at moderate signal to noise ratios, emphasising the importance of the SVD approach to EEG analysis.
Abstract: This paper describes a novel application of singular value decomposition (SVD) of subsets of the phase-space trajectory for calculation of the attractor dimension of a small data set. A certain number of local centres (M) are chosen randomly on the attractor and an adequate number of nearest neighbours (q=50) are ordered around each centre. The local intrinsic dimension of a local centre is determined by the number of significant singular values and the attractor dimension (D2) by the average of the local intrinsic dimensions of the local centres. The SVD method has been evaluated for model data and EEG. The results indicate that the SVD method is a reliable approach for estimation of attractor dimension at moderate signal to noise ratios. The paper emphasises the importance of SVD approach to EEG analysis.
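The dimension estimate can be sketched as follows (function names and thresholds are illustrative; the paper orders q = 50 neighbours around each centre):

```python
import numpy as np

def local_intrinsic_dim(points, centre_idx, q=50, rel_threshold=0.05):
    """Local intrinsic dimension at one centre: order the q nearest
    neighbours around it, centre them, and count the singular values
    that are significant relative to the largest one."""
    dists = np.linalg.norm(points - points[centre_idx], axis=1)
    nbrs = points[np.argsort(dists)[:q]]
    sv = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
    return int(np.sum(sv > rel_threshold * sv[0]))

def attractor_dim(points, n_centres=10, q=50, seed=0):
    """D2 estimate as the average local intrinsic dimension over
    randomly chosen centres, as in the paper's scheme."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(len(points), n_centres, replace=False)
    return float(np.mean([local_intrinsic_dim(points, c, q) for c in centres]))
```

On an embedded plane or curve the estimate recovers the true dimension; on real EEG the phase-space trajectory would first be reconstructed by delay embedding.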

Journal ArticleDOI
TL;DR: A clock recovery technique based on discrete-time processing of the demodulated baseband signal is proposed, together with a new maximum likelihood sequence estimator for the data that uses a whitening filter followed by a Viterbi decoder.
Abstract: Demodulation of Gaussian Minimum Shift Keying (GMSK) using a limiter-discriminator is a low complexity alternative to coherent demodulation. This so-called digital FM demodulation is followed by clock recovery, sampling, and thresholding. Conventionally, clock recovery is done in hardware, and matched filtering is usually not possible when the Gaussian pulse is wider than a bit duration. We propose a clock recovery technique based on discrete-time processing of the demodulated baseband signal. This technique couples very nicely with a new maximum likelihood sequence estimator for the data that uses a whitening filter followed by a Viterbi decoder. The entire detection algorithm can be implemented in an efficient manner on a Digital Signal Processor (DSP). Computer simulation results are presented to show that the new algorithm performs better than the conventional slicer by as much as 5.5 dB.
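The sequence-estimation step can be illustrated with a generic Viterbi maximum-likelihood sequence estimator over a known discrete-time channel (the whitening filter that produces such a channel model from the discriminator output is omitted, and the channel taps below are invented):

```python
import itertools

def viterbi_mlse(y, taps):
    """ML sequence estimation of a +/-1 symbol stream observed through a
    known FIR channel, via the Viterbi algorithm. The trellis state is
    the tuple of the last len(taps)-1 symbols, most recent first."""
    L = len(taps) - 1                       # channel memory
    states = list(itertools.product([-1, 1], repeat=L))
    cost = {s: 0.0 for s in states}         # path metric per state
    paths = {s: [] for s in states}
    for obs in y:
        new_cost, new_paths = {}, {}
        for s in states:
            for x in (-1, 1):
                pred = taps[0] * x + sum(t * p for t, p in zip(taps[1:], s))
                ns = (x,) + s[:-1]
                c = cost[s] + (obs - pred) ** 2
                if ns not in new_cost or c < new_cost[ns]:
                    new_cost[ns], new_paths[ns] = c, paths[s] + [x]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]   # survivor with least metric
```

The DSP implementation in the paper couples this with the proposed discrete-time clock recovery; the sketch shows only the decoding core.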

Journal ArticleDOI
TL;DR: It is shown that with the knowledge of these instants it is possible to perform prosodic manipulation of speech and also an accurate analysis of speech for extracting the source and system characteristics.
Abstract: The objective of this paper is to demonstrate the importance of position of the analysis time window in time-frequency analysis of speech signals. Speech signals contain information about the time varying characteristics of the excitation source and the vocal tract system. Resolution in both the temporal and spectral domains is essential for extracting the source and system characteristics from speech signals. It is not only the resolution, as determined by the analysis window in the time domain, but also the position of the window with respect to the production characteristics that is important for accurate analysis of speech signals. In this context, we propose an event-based approach for speech signals. We define the occurrence of events at the instants corresponding to significant excitation of the vocal tract system. Knowledge of these instants enables us to place the analysis window suitably for extracting the characteristics of the excitation source and the vocal tract system even from short segments of data. We present a method of extracting the instants of significant excitation from speech signals. We show that with the knowledge of these instants it is possible to perform prosodic manipulation of speech and also an accurate analysis of speech for extracting the source and system characteristics.

Journal ArticleDOI
TL;DR: The use of a new form of information, the h* set (the set of optimum cost values of all nodes of a cluster), is discussed, and an algorithm for using this information that is more effective than A* is presented.
Abstract: Learning for problem solving involves acquisition and storage of relevant knowledge from past problem solving instances in a domain in such a form that the information can be used to effectively solve subsequent problems in the same domain. Our interest is in the role of learning in problem solving systems that solve problems optimally. Such problems can be solved by an informed search algorithm like A*. Learning a stronger heuristic function leads to more effective problem solving. A set of arbitrary features of the domain induces a clustering of the state space. The heuristic information associated with each cluster may be learned. We discuss the use of a new form of information in the form of an h* set (the set of optimum cost values of all nodes of the cluster) and present an algorithm for using the information that is more effective than A*. A possibilistic (fuzzy set theoretic) extension of this algorithm is also presented. This version can handle incomplete information and is expected to find solutions faster in the average case with controlled relaxation in the optimality guarantee. We also discuss how to make the best use of the features, when the system has memory restrictions that limit the number of classes that can be stored.
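For reference, plain A* can be sketched as below; the paper's contribution is a stronger heuristic learned from stored h* values of state clusters, which would be supplied as the h argument here:

```python
import heapq

def astar(start, goal, neighbours, h):
    """Standard A* search. `neighbours(node)` yields (successor, edge_cost)
    pairs and `h` is an admissible heuristic; returns (cost, path)."""
    open_heap = [(h(start), 0, start, [start])]
    seen = {}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if node in seen and seen[node] <= g:
            continue                         # already expanded more cheaply
        seen[node] = g
        for nxt, w in neighbours(node):
            heapq.heappush(open_heap, (g + w + h(nxt), g + w, nxt, path + [nxt]))
    return None
```

A learned heuristic drawn from cluster h* sets tightens the f-values and so prunes more of the search space than a hand-crafted h.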

Journal ArticleDOI
TL;DR: A machine learning system is presented that uses inductive logic programming techniques to learn how to identify transmembrane domains from amino acid sequences; it facilitates the use of operators such as ‘contains’, which act on entire sequences rather than on individual elements of a sequence.
Abstract: We present our machine learning system, which uses inductive logic programming techniques to learn how to identify transmembrane domains from amino acid sequences. Our system facilitates the use of operators such as ‘contains’, which act on entire sequences rather than on individual elements of a sequence. The prediction accuracy of our new system is around 93%, which compares favourably with earlier results.

Journal ArticleDOI
TL;DR: An Intelligent Decision Support System (IDSS) which aids the manager in decision making and drawing up a feasible schedule of activities while taking into consideration the constraints of resources and time, will have a considerable impact on the efficient management of the project as mentioned in this paper.
Abstract: Management of large projects, especially the ones in which a major component of R&D is involved and those requiring knowledge from diverse specialised and sophisticated fields, may be classified as semi-structured problems. In these problems, there is some knowledge about the nature of the work involved, but there are also uncertainties associated with emerging technologies. In order to draw up a plan and schedule of activities of such a large and complex project, the project manager is faced with a host of complex decisions that he has to take, such as when to start an activity, for how long the activity is likely to continue, etc. An Intelligent Decision Support System (IDSS) which aids the manager in decision making and in drawing up a feasible schedule of activities, while taking into consideration the constraints of resources and time, will have a considerable impact on the efficient management of the project. This report discusses the design of an IDSS that helps from the project planning phase through the scheduling phase. The IDSS uses a new project scheduling tool, the Project Influence Graph (PIG).

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the propagation of waves in an elastic layer containing voids and showed that the velocity of the propagation decreases due to the presence of voids in the material medium of the layer and the voids cause dispersion of the general waveform.
Abstract: The present paper investigates the propagation of waves in an elastic layer containing voids. Numerical calculations and discussions indicate that the velocity of the propagation of waves decreases due to the presence of voids in the material medium of the layer and the voids cause dispersion of the general waveform.

Journal ArticleDOI
Amar Isli
TL;DR: A method is given for simulating, in the Büchi case, an alternating automaton by a usual nondeterministic one, together with a method for translating any formula of LPTL into an equivalent Büchi alternating automaton so that this result applies to the satisfiability problem.
Abstract: We first give a method for simulating, in the Büchi case, an alternating automaton by a usual nondeterministic one. Then, to apply this result to the satisfiability problem of Linear Propositional Temporal Logic (LPTL), we give a method for translating any formula of this logic into an equivalent Büchi alternating automaton.

Journal ArticleDOI
TL;DR: A new graphic language is presented which can serve, for instance, as models for VLSI and control systems; its primitives are based on standard timing diagrams, which is a great advantage over other formalisms since designers can rapidly master it.
Abstract: We present a new graphic language which can serve, for instance, as models for VLSI and control systems. Its primitives are based on standard timing diagrams, and this is a great advantage over other formalisms since designers can rapidly master it. The semantics is rigorously defined in the formalism of the theory of automata on infinite words. Using this formalism, we are able to give a rather precise upper-bound on the expressive power of our graphic language in terms of a language-theoretic measure, the concatenation level. A detailed example is presented.

Journal ArticleDOI
TL;DR: While conventional deterministic annealing makes use of the Euclidean squared error distance measure, this work develops an algorithm that can be used for clustering with Hamming distance as the distance measure, which is required in the error correcting scenario.
Abstract: We address the problem of designing codes for specific applications using deterministic annealing. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over the particular dimensional space. We have shown that the total distortion incurred in encoding a training set is related to the probability of correct reception over a symmetric channel. While conventional deterministic annealing makes use of the Euclidean squared error distance measure, we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure, which is required in the error correcting scenario.
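The change from the Euclidean to the Hamming case can be sketched with a hard-assignment clustering loop whose centroid update is a bitwise majority vote, which minimises a cluster's total Hamming distortion (the paper embeds the Hamming measure in deterministic annealing; this sketch covers only the distance and centroid steps, and its initialisation from the first k words is a simplification):

```python
import numpy as np

def hamming_cluster(words, k, iters=20):
    """Lloyd-style clustering of binary words under Hamming distance.
    Codewords are updated by bitwise majority vote over cluster members,
    the Hamming analogue of the Euclidean mean."""
    codes = words[:k].copy()                # simple deterministic init
    assign = np.zeros(len(words), dtype=int)
    for _ in range(iters):
        # Hamming distance of every word to every codeword.
        dist = (words[:, None, :] != codes[None, :, :]).sum(axis=2)
        assign = dist.argmin(axis=1)
        for j in range(k):
            members = words[assign == j]
            if len(members):
                codes[j] = (members.mean(axis=0) >= 0.5).astype(words.dtype)
    return codes, assign
```

The resulting codewords play the role of the block code; annealing the assignments, as in the paper, avoids the poor local minima that hard assignment can fall into.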

Journal ArticleDOI
TL;DR: In this article, the problem of model reference adaptive control (MRAC) of asymptotically stable plants of unknown order with zeros located anywhere in the s-plane except at the origin is addressed.
Abstract: The problem addressed is one of model reference adaptive control (MRAC) of asymptotically stable plants of unknown order with zeros located anywhere in the s-plane except at the origin. The reference model is also asymptotically stable and lacking zero(s) at s = 0. The control law is to be specified only in terms of the inputs to and outputs of the plant and the reference model. For inputs from a class of functions that approach a non-zero constant, the problem is formulated in an optimal control framework. By successive refinements of the sub-optimal laws proposed here, two schemes are finally designed. These schemes are characterized by boundedness, convergence and optimality. Simplicity and total time-domain implementation are the additional striking features. Simulations to demonstrate the efficacy of the control schemes are presented.