
Showing papers by "Techno India" published in 2007


Journal ArticleDOI
TL;DR: In this article, a utility-interactive hybrid distributed generation scheme with a reactive power compensation feature is presented, where the basic objective is to realize a reliable power supply for a remotely located critical load.
Abstract: A new, utility-interactive hybrid distributed generation scheme with a reactive power compensation feature is presented. The basic objective is to realize a reliable power supply for a remotely located critical load. A fuel cell (FC) stack and a photovoltaic (PV) array are considered as energy sources. These sources can be operated independently or in conjunction as per the requirement. The control logic employed ensures maximum utilization of the PV array, resulting in optimum operational costs. Only one inverter is used to connect both the FC stack and the PV array to the utility. Apart from feeding active power into the grid, the system can also provide reactive power compensation. Active and reactive power can be independently controlled by controlling the inverter's power angle and modulation index, respectively. This provides more flexibility in control and operation. All the details of this work, including power and control circuits, MATLAB simulation results, and experimental results, are presented.
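The decoupled control mentioned in the abstract follows from the standard power-flow relations for a voltage source coupled to the grid through a reactance: the power angle mainly sets the active power, while the inverter voltage magnitude (and hence the modulation index) mainly sets the reactive power. The sketch below illustrates those textbook relations only; the variable names and numbers are assumptions, not the paper's control circuit.

```python
import math

def inverter_power_flow(v_inv, v_grid, delta_rad, x_link):
    """Approximate active/reactive power injected by a grid-tied inverter
    coupled through a link reactance x_link (lossless, fundamental-frequency model).

    v_inv    : inverter output voltage magnitude (per unit or volts)
    v_grid   : grid voltage magnitude
    delta_rad: power angle between inverter and grid voltages (radians)
    x_link   : coupling reactance (per unit or ohms)
    """
    p = (v_inv * v_grid / x_link) * math.sin(delta_rad)                 # set mainly by the power angle
    q = (v_inv * v_grid * math.cos(delta_rad) - v_grid ** 2) / x_link   # set mainly by |v_inv|, i.e. the modulation index
    return p, q

# Example: a larger power angle raises P, while raising the modulation index
# (and hence v_inv) raises Q, which is the independence the abstract describes.
print(inverter_power_flow(v_inv=1.05, v_grid=1.0, delta_rad=math.radians(10), x_link=0.1))
```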

76 citations


Journal ArticleDOI
TL;DR: In this article, a new crystal, 2-aminopyridinium para-nitrobenzoate (C5H7N2+·C7H4NO4−), has been synthesized by the reaction of 2-aminopyridine with para-nitrobenzoic acid using the slow evaporation method.

63 citations


Proceedings ArticleDOI
16 Aug 2007
TL;DR: This survey provides a review of the whole grid fault tolerance area, mainly from the perspective of checkpointing, and proposes a task-level fault tolerance scheme that can upgrade grid performance significantly at only a moderate increase in extra resources or scheduling delays in a risky grid computing environment.
Abstract: One motivation of grid computing is to aggregate the power of widely distributed resources and provide non-trivial services to users. To achieve this goal, an efficient grid fault tolerance system is an essential part of the grid. Rather than covering the whole grid fault tolerance area, this survey provides a review of the subject mainly from the perspective of checkpointing. In this review the challenges for fault tolerance are identified. In grid environments, execution failures can occur for various reasons such as network failure, overloaded resource conditions, or non-availability of required software components. Thus, fault-tolerant systems should be able to identify and handle failures and support reliable execution in the presence of concurrency and failures. In scheduling a large number of user jobs for parallel execution on an open-resource grid system, the jobs are subject to system failures or delays caused by infected hardware, software vulnerability, and distrusted security policy. In this paper we propose a task-level fault tolerance scheme. Task-level techniques mask the effects of the execution failure of tasks. The four task-level techniques are retry, alternate resource, checkpointing, and replication. The checkpointing strategy achieves optimal load balance across different grid sites. These task-level fault tolerance techniques can upgrade grid performance significantly at only a moderate increase in extra resources or scheduling delays in a risky grid computing environment.
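Three of the four task-level techniques named in the abstract compose naturally: retry a failed task, resume it from its last checkpoint, and move it to an alternate resource (replication, the fourth, would run copies on several resources in parallel). The sketch below is a generic illustration of that combination; the task interface (initial_state, step, done) and the back-off policy are assumptions, not part of the survey.

```python
import time

def run_with_task_level_fault_tolerance(task, resources, max_retries=3, checkpoint=None):
    """Generic sketch of task-level fault tolerance: retry, alternate resource,
    and checkpoint/resume. The task interface used here is hypothetical."""
    for attempt in range(max_retries):
        resource = resources[attempt % len(resources)]      # try an alternate resource each time
        try:
            state = checkpoint if checkpoint is not None else task.initial_state()
            while not task.done(state):
                state = task.step(state, resource)           # execute one unit of work
                checkpoint = state                           # checkpoint after every step
            return state
        except RuntimeError:
            time.sleep(2 ** attempt)                         # back off, then retry elsewhere from the checkpoint
    raise RuntimeError("task failed on all resources")
```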

53 citations


Journal ArticleDOI
TL;DR: Aspergillus oryzae IFO-30103 produced very high levels of α-amylase by modified solid-state fermentation (mSSF) compared to SSF carried out in enamel-coated metallic trays utilizing wheat bran as substrate.
Abstract: A GROWTEK bioreactor was used as a modified solid-state fermentor to circumvent many of the problems associated with the conventional tray reactors for solid-state fermentation (SSF). Aspergillus oryzae IFO-30103 produced very high levels of α-amylase by modified solid-state fermentation (mSSF) compared to SSF carried out in enamel-coated metallic trays utilizing wheat bran as substrate. A high α-amylase yield of 15,833 U g−1 dry solid was obtained in mSSF when the fungus was cultivated at an initial pH of 6.0 at 32°C for 54 h, whereas α-amylase production in SSF reached its maximum (12,899 U g−1 dry solid) at 30°C after 66 h of incubation. With the supplementation of 1% NaNO3, the maximum activity obtained was 19,665 U g−1 dry solid (24% higher than control) in mSSF, whereas in SSF the maximum activity was 15,480 U g−1 dry solid in the presence of 0.1% Triton X-100 (20% higher than the control).

39 citations


Proceedings ArticleDOI
16 Apr 2007
TL;DR: A technique to predict controller delay as a function of the loop unrolling factor is proposed, and this prediction is used with other search space pruning methods to automatically determine the optimal loop unroll factor that results in a controller whose delay fits into a specified time budget, without an exhaustive exploration.
Abstract: Loop unrolling is a well-known compiler optimization that can lead to significant performance improvements. When used in High Level Synthesis (HLS), unrolling can affect the controller complexity and delay. We study the effect of the loop unrolling factor on the delay of controllers generated during HLS. We propose a technique to predict controller delay as a function of the loop unrolling factor, and use this prediction with other search space pruning methods to automatically determine the optimal loop unrolling factor that results in a controller whose delay fits into a specified time budget, without an exhaustive exploration. Experimental results indicate delay predictions that are close to measured delays, yet significantly faster than exhaustive synthesis.
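The search the abstract describes, evaluating a fast delay predictor over candidate unroll factors and keeping only those that meet the timing budget rather than synthesizing every variant, can be pictured with the toy sketch below. The linear predictor and the numbers are illustrative assumptions, not the paper's model.

```python
def choose_unroll_factor(candidate_factors, predict_delay, delay_budget):
    """Pick the largest unrolling factor whose *predicted* controller delay
    still meets the timing budget, avoiding exhaustive synthesis runs.

    predict_delay: callable mapping an unroll factor to an estimated
                   controller delay (e.g. a fitted analytical model).
    """
    feasible = [f for f in sorted(candidate_factors)
                if predict_delay(f) <= delay_budget]     # prune with the predictor only
    return feasible[-1] if feasible else None            # largest factor that still fits

# Toy predictor: controller delay grows with the unroll factor (purely illustrative).
print(choose_unroll_factor(range(1, 33), lambda f: 2.0 + 0.15 * f, delay_budget=5.0))
```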

36 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss issues from the perspective of hardware and software requirements for efficient offshoring, aiming to achieve higher precision, protection, and throughput by applying core-computing techniques to the existing practices of outsourcing.
Abstract: Internet technology has impelled us to develop faith in the modern practices of business, commerce, and trade. Offshoring has been viewed as a global phenomenon on the economic frontier. While new technologies need to be framed, stopgap arrangements in the form of transient solutions to upgrade the current systems are also desired. Newer regulations and multi-jurisdictional compliance have profound impacts on the growth of outsourcing projects. The development of new technological solutions must challenge the myth that legislation and statutory practices are the only possible mechanisms to counter unscrupulous activities in the context of outsourcing. A change in outlook toward such methodologies is essential to shed the technological inertia and latency. This article opens up discussion of issues from the perspective of hardware and software requirements for efficient offshoring. The aim is to achieve higher precision, protection, and throughput by applying core-computing techniques to the existing practices of outsourcing.

28 citations


Journal ArticleDOI
TL;DR: This chapter introduces a similarity preserving function called the sequence and set similarity measure (S3M) that captures both the order of occurrence of page visits and the content of pages, and proposes a new clustering algorithm, SeqPAM, for clustering sequential data.
Abstract: With the growth in the number of Web users and the necessity of making information available on the Web, the problem of Web personalization has become very critical and popular. Developers are trying to customize a Web site to the needs of specific users with the help of knowledge acquired from user navigational behavior. Since user page visits are intrinsically sequential in nature, efficient clustering algorithms for sequential data are needed. In this chapter, we introduce a similarity preserving function called the sequence and set similarity measure (S3M) that captures both the order of occurrence of page visits as well as the content of pages. We conducted pilot experiments comparing the results of PAM, a standard clustering algorithm, with two similarity measures: cosine and S3M. The goodness of the clusters resulting from both measures was computed using a cluster validation technique based on average Levenshtein distance. Results on the pilot dataset established the effectiveness of S3M for sequential data. Based on these results, we proposed a new clustering algorithm, SeqPAM, for clustering sequential data. We tested the new algorithm on two datasets, namely the cti and msnbc datasets. We provided recommendations for Web personalization based on the clusters obtained from SeqPAM for the msnbc dataset.
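A measure that "captures both the order of occurrence of page visits as well as the content of pages" can be sketched by blending a longest-common-subsequence score (order) with a Jaccard score (content). This is an illustration in the spirit of S3M, not the chapter's exact formula; the mixing weight alpha is an assumed parameter.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two page-visit sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def s3m_like(a, b, alpha=0.5):
    """Order-plus-content similarity in the spirit of S3M (not the exact formula).
    alpha weights the sequence part against the set (Jaccard) part."""
    seq_sim = lcs_length(a, b) / max(len(a), len(b))          # order of page visits
    set_sim = len(set(a) & set(b)) / len(set(a) | set(b))     # content of pages visited
    return alpha * seq_sim + (1 - alpha) * set_sim

print(s3m_like(["home", "news", "sports"], ["home", "sports", "weather"]))
```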

28 citations


Journal ArticleDOI
TL;DR: Extensive experimental results supported with analytical formulation establish the effectiveness of the RBFFCA-based pattern classifier and prove it to be an efficient and cost-effective alternative for the classification problem.
Abstract: A hybrid learning algorithm, termed RBFFCA, for the solution of classification problems with real-valued inputs is proposed. It comprises an integration of the principles of radial basis functions (RBF) and fuzzy cellular automata (FCA). The FCA has been evolved through a genetic algorithm (GA) formulation to perform the pattern classification task. The versatility of the proposed hybrid scheme is illustrated through its application in diverse fields. Simulation results conducted on benchmark databases show that the hybrid pattern classifier achieves excellent performance both in terms of classification accuracy and learning efficiency. Extensive experimental results supported with analytical formulation establish the effectiveness of the RBFFCA-based pattern classifier and prove it to be an efficient and cost-effective alternative for the classification problem.

18 citations


Journal ArticleDOI
TL;DR: In this paper, the motion of a self-propelling micro-organism symmetrically located in a rectangular channel containing viscous fluid has been studied by considering the peristaltic and longitudinal waves travelling along the walls of the channel.
Abstract: The motion of a self-propelling micro-organism symmetrically located in a rectangular channel containing viscous fluid has been studied by considering the peristaltic and longitudinal waves travelling along the walls of the channel. The expressions for the velocity of the micro-organism and the time-averaged flux have been obtained under the long wavelength approximation by taking into account the viscosity variation of the fluid across the channel. Particular cases for constant viscosity, and for viscosity represented by a step function, have been discussed. It has been observed that the velocity of the micro-organism decreases as the viscosity of the peripheral layer increases and its thickness decreases.

18 citations


Proceedings ArticleDOI
01 Sep 2007
TL;DR: In this article, the effectiveness of different cooling techniques used for outdoor electronics was analyzed and compared for an outdoor electronic enclosure, including white oil paint on the outer surface, a radiation shield, a double-walled enclosure, fans for internal air circulation, and air-to-air heat exchangers.
Abstract: The thermal management of an outdoor electronic enclosure can be quite challenging due to the additional thermal load from the sun and the requirement of having an air-sealed enclosure. It is essential to consider the effect of solar heating loads in the design process; otherwise, it can shorten the life expectancy of the electronic product or lead to catastrophic failure. The main objective of this work is to analyze and compare the effectiveness of different cooling techniques used for outdoor electronics. Various cooling techniques were compared, such as special coats and paints on the outer surface, a radiation shield, a double-walled enclosure, fans for internal air circulation, and air-to-air heat exchangers. A highly simplified, typical outdoor system was selected for this study, measuring approximately 300 × 300 × 400 mm (W × L × H). Solar radiation was incident on three sides of the enclosure. There were eight equally spaced PCBs inside the enclosure dissipating 12.5 W each uniformly (100 W total). A computational fluid dynamics (CFD) model of the system was built and analyzed. This was followed by building a mock-up of the system and conducting experiments to validate the CFD model. It was found that some of the simplest cooling techniques, like white oil paint on the outer surface, can significantly reduce the impact of solar loads. Adding internal circulation fans can also be very effective. Using air-to-air heat exchangers was found to be the most effective solution, although it is more complex and costly.
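A first-cut estimate of the total heat load in such an enclosure is simple arithmetic: internal dissipation plus absorbed solar flux on the sunlit faces. The sketch below uses the enclosure dimensions and PCB dissipation quoted in the abstract, but the solar flux and surface absorptivities are assumed values, not the paper's boundary conditions; it only illustrates why a low-absorptivity white paint helps so much.

```python
# Rough heat-load estimate for a sealed 300 x 300 x 400 mm outdoor enclosure
# (solar flux and absorptivity values are assumptions for illustration).

internal_w = 8 * 12.5                      # eight PCBs at 12.5 W each -> 100 W
w, l, h = 0.300, 0.300, 0.400              # enclosure dimensions in metres
sunlit_area = w * h + l * h + w * l        # three sunlit faces (two sides plus top), m^2
solar_flux = 800.0                         # W/m^2, assumed incident solar irradiance
absorptivity_dark = 0.9                    # typical for a dark enclosure surface (assumed)
absorptivity_white = 0.3                   # roughly what a white oil paint achieves (assumed)

for name, alpha in [("dark surface", absorptivity_dark), ("white paint", absorptivity_white)]:
    solar_w = alpha * solar_flux * sunlit_area
    print(f"{name}: solar load {solar_w:.0f} W, total load {internal_w + solar_w:.0f} W")
```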

13 citations


Proceedings ArticleDOI
22 Jun 2007
TL;DR: This report presents an analysis on the support for VoIP by an alternative virtual MIMO uplink mechanism that is called simultaneous coordinated channel access (SCCA) and the use of SCCA as a complimentary mechanism in the uplink to the MR-MRA and PSMP schemes to improve the throughput for multiuser transmissions.
Abstract: The high throughput (HT) 802.11n architecture is mainly aimed at exchanging large chunks of data using bit rates of 216 Mbps in 20 MHz and 432 Mbps (or even higher) in 40 MHz bandwidth, respectively. However, for real-time data streams with small payloads like VoIP, the performance of 802.11n is not very impressive and is worse than 802.11a, as we show in this report. In order to overcome this and to accelerate QoS packet transmission to multiple users simultaneously, TGnSync proposed a mechanism for multiuser packet aggregation in the downlink with multiple uplink responses known as multi-response multiple receiver aggregate (MR-MRA). With the closure of the TGnSync group and the emergence of the EWC, an Intel-backed 802.11n group, a second mechanism known as power save multiple poll (PSMP) to describe the downlink transmission (DLT) and uplink transmission (ULT) was introduced [2]. This report presents an analysis of the support for VoIP by an alternative virtual MIMO uplink mechanism that we call simultaneous coordinated channel access (SCCA) and the use of SCCA as a complementary mechanism in the uplink to the MR-MRA and PSMP schemes to improve the throughput for multiuser transmissions. We show that the combined mechanisms MRA-SCCA and PSMP-SCCA are attractive solutions to meet the demand for QoS capacity over WLANs.
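The reason small VoIP payloads benefit so little from the higher 802.11n PHY rates is that the fixed per-frame overhead (preambles, interframe spaces, the acknowledgement exchange) dwarfs the payload airtime. The back-of-the-envelope sketch below uses an assumed 60 µs of per-frame overhead and a 200-byte payload purely for illustration; it is not the report's analysis.

```python
# Why small VoIP frames gain little from higher PHY rates: the fixed per-frame
# overhead (assumed 60 microseconds here) dominates the shrinking payload airtime.
payload_bits = 200 * 8          # a typical small VoIP payload size (assumption)
overhead_us = 60.0              # assumed fixed MAC/PHY overhead per frame

for label, rate_mbps in [("802.11a, 54 Mbps", 54), ("802.11n, 216 Mbps", 216)]:
    tx_us = payload_bits / rate_mbps            # bits / (Mbit/s) = microseconds of airtime
    eff = tx_us / (tx_us + overhead_us)         # fraction of channel time spent on payload
    print(f"{label}: payload airtime {tx_us:.1f} us, channel efficiency {eff:.1%}")
```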

Journal ArticleDOI
TL;DR: In this paper, generating relations are established by using operational formulae for a class of polynomials T_kn^(α+s−1)(x) defined by Mittal, and finite summation formulae for (1.6) are derived using operational techniques.
Abstract: The object of this paper is to establish some generating relations by using operational formulae for a class of polynomials T_kn^(α+s−1)(x) defined by Mittal. We have also derived finite summation formulae for (1.6) by employing operational techniques. In the end several special cases are discussed.

Proceedings ArticleDOI
01 Dec 2007
TL;DR: It is argued that tightening regulations alone will not improve e-waste management and disposal; instead, the regulations must acknowledge and facilitate an international division of labor in the e-waste recycling industry to address the need for employment in the informal sector.
Abstract: One consequence of the increased ubiquity and shortening product cycles of information and communication technologies (ICTs) is the proliferation of hazardous electronic waste (e-waste). Despite governance mechanisms to control the management and disposal of e-waste, significant quantities are exported from developed to developing countries, where they are processed in unsafe conditions in the informal sector. To understand this phenomenon, this paper examines e-waste management and disposal in India. It shows that so long as the benefits from transnational e-waste flows exceed the benefits of compliance with regulations, such flows will continue. It argues that tightening regulations alone will not improve e-waste management and disposal; instead, the regulations must acknowledge and facilitate an international division of labor in the e-waste recycling industry. By doing so, they will not only address the need for employment in the informal sector, but also promote a cleaner environment.

Proceedings ArticleDOI
14 Aug 2007
TL;DR: A facility to maintain a database of a patient's retinal photographs and associated data, captured on a regular basis and scrutinized for the prediction of diseases, may easily be incorporated.
Abstract: The fundus images of the retina of the human eye can provide valuable information about human health and open up a window of unforeseeable opportunities. In this respect, one can systematically assess digital retinal photographs to predict chronic diseases. This eliminates the need for manual assessment of ophthalmic images in diagnostic practices. It is also possible to detect certain types of cancer and cataract in their early stages, in addition to diseases like hypertension, stroke, and serious organ malfunction in diabetic patients. A facility to maintain a database of a patient's retinal photographs and associated data, captured on a regular basis and scrutinized for the prediction of diseases, may easily be incorporated.

Proceedings ArticleDOI
26 Apr 2007
TL;DR: In this article, the authors reported three-dimensional trapping of charge stabilized colloidal particles near a charged surface (glass boundary), using a low numerical aperture objective, which arises due to balancing action between optical forces and electrostatic repulsions.
Abstract: We report three-dimensional trapping of charge stabilized colloidal particles near a charged surface (glass boundary), using a low numerical aperture objective. The trapping arises due to balancing action between optical forces and electrostatic repulsions. The observation of 3-d colloidal clusters and a linear array of colloidal microspheres along the trap beam and the possible reasons thereof are also discussed.

Proceedings ArticleDOI
01 Dec 2007
TL;DR: In this paper, the authors report the results of their experimental studies on interaction of synthetic jets with a bulk flow over a heated surface in a low profile channel as well as its impact on overall heat transfer enhancement.
Abstract: This paper reports the results of our experimental studies on the interaction of synthetic jets with a bulk flow over a heated surface in a low-profile channel, as well as its impact on overall heat transfer enhancement. Particle image velocimetry (PIV) studies were conducted to study the impact of jet frequency, mean flow velocity, and synthetic jet orientation on the flow and heat transfer characteristics. The results showed an increase in jet velocity, and hence heat transfer, with an increase in frequency. Reduction in mean flow velocity facilitated entrainment of the synthetic jet and also led to its delayed dispersion. The thermal results were consistent with the flow patterns observed, and heat transfer enhancements as high as 30% were observed. However, regions of attenuated heat transfer were also seen in cross-flow, which need careful consideration during thermal design.

Journal ArticleDOI
Pratap R Patnaik
TL;DR: Neural and cybernetic models describe and optimize unsteady state fed-batch microbial reactors with finite dispersion more effectively than mechanistic models, but these "intelligent" models too have weaknesses, and hence a hybrid approach combining such models with some mechanistic features is suggested.
Abstract: Background: For many microbial processes, the complexity of the metabolisms and the responses to transient and realistic conditions are difficult to capture in mechanistic models. The cells seem to have an innate intelligence that enables them to respond optimally to environmental changes. Some "intelligent" models have therefore been proposed and compared with a mechanistic model for fed-batch cultures of Ralstonia eutropha.

Journal ArticleDOI
TL;DR: This work proposes dynamically modulating the padding based on criticality of memory requests, further extending ZettaRAM's energy advantage with negligible system slowdown and extracts energy savings from six otherwise uncompetitive molecules.
Abstract: ZettaRAM™ is a nascent memory technology with roots in molecular electronics. It uses a conventional DRAM architecture except that the conventional capacitor is replaced with a new molecular capacitor. The molecular capacitor has a discrete threshold voltage, above which all molecules are charged and below which all molecules are discharged. Thus, while voltage still controls charging/discharging, the fixed charge deposited on the molecular capacitor is voltage-independent. Charge-voltage decoupling makes it possible to lower voltage from one memory generation to the next while still maintaining the minimum critical charge for reliable operation, whereas DRAM voltage scaling is constrained by charge. Voltage can be scaled inexpensively and reliably by engineering new, more favorable molecules. We analyze how three key molecule parameters influence voltage and then evaluate 23 molecules in the literature. Matching DRAM density and speed, the best molecule yields 61 percent energy savings. While the fixed charge is voltage-independent, speed is voltage-dependent. Thus, voltage is padded for competitive latency. We propose dynamically modulating the padding based on criticality of memory requests, further extending ZettaRAM's energy advantage with negligible system slowdown. Architectural management extends the best molecule's energy savings to 77 percent and extracts energy savings from six otherwise uncompetitive molecules.
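The padding idea in the last sentences can be read as a simple policy: requests on the critical path get the full voltage pad above the molecular threshold (fast charging, more energy), while non-critical requests get a smaller pad (slower, cheaper). The sketch below is purely illustrative; the voltage values and pad sizes are assumptions, not the paper's parameters.

```python
def write_voltage(v_threshold, is_critical, full_pad=0.3, small_pad=0.05):
    """Illustrative padding policy for a ZettaRAM-like molecular cell: charge
    well above the molecular threshold when the request is latency-critical
    (faster charging, more energy), and only slightly above it otherwise.
    All voltage values here are assumptions, not the paper's numbers."""
    return v_threshold + (full_pad if is_critical else small_pad)

print(write_voltage(0.9, is_critical=True))    # larger pad: fast path for critical requests
print(write_voltage(0.9, is_critical=False))   # small pad: slower but lower energy
```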

Proceedings ArticleDOI
29 Nov 2007
TL;DR: In this article, the authors report the design, development and characterization of a multi-beam reflection HOE for light concentration: a single-focus, multiply exposed reflection element with wavelength selectivity.
Abstract: High-efficiency solar cells are emerging as a viable tool for tapping solar energy. Such cells need cool light to be concentrated on them for high conversion efficiency. Conventional refractive and reflective light concentrators, with sun tracking systems, are bulky and heat the solar cell due to single-focus convergence of heat and light radiation. HOEs are emerging as an active candidate for making new-generation light concentrators. In this paper we report the design, development and characterization of a multi-beam reflection HOE for light concentration. The proposed design of a single-focus, multiply exposed reflection element with wavelength selectivity can be used to avoid a sun-tracking system.

Proceedings ArticleDOI
10 Sep 2007
TL;DR: This paper for the first time integrates on-the-fly cache block compression/decompression algorithms in the cache coherence protocols by leveraging the directory structure already present in these scalable machines.
Abstract: Ever-increasing memory footprint of applications and increasing mainstream popularity of shared memory parallel computing motivate us to explore memory compression potential in distributed shared memory (DSM) multiprocessors. This paper for the first time integrates on-the-fly cache block compression/decompression algorithms in the cache coherence protocols by leveraging the directory structure already present in these scalable machines. Our proposal is unique in the sense that instead of employing custom compression/decompression hardware, we use a simple on-die protocol processing core in dual-core nodes for running our directory-based coherence protocol suitably extended with compression/decompression algorithms. We design a low-overhead compression scheme based on frequent patterns and zero runs present in the evicted dirty L2 cache blocks. Our compression algorithm examines the first eight bytes of an evicted dirty L2 block arriving at the home memory controller and speculates which compression scheme to invoke for the rest of the block. Our customized algorithm for handling completely zero cache blocks helps hide a significant amount of memory access latency. Our simulation-based experiments on a 16-node DSM multiprocessor with seven scientific computing applications show that our best design achieves, on average, 16% to 73% storage saving per evicted dirty L2 cache block for four out of the seven applications at the expense of at most 15% increased parallel execution time.
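The speculation step described above, peeking at the first eight bytes of an evicted dirty L2 block and picking an encoder (all-zero, zero-run, or frequent-pattern) for the rest, can be pictured with the sketch below. The thresholds and category names are assumptions for illustration; the paper's actual encodings are not reproduced here.

```python
def choose_compressor(block: bytes, peek: int = 8) -> str:
    """Speculatively pick a compression scheme from the first few bytes of an
    evicted dirty L2 block (hypothetical thresholds, not the paper's encoding)."""
    head = block[:peek]
    if block.count(0) == len(block):
        return "all-zero"              # completely zero blocks get a special, tiny encoding
    if head.count(0) >= peek // 2:
        return "zero-run"              # many zero bytes up front -> run-length encode zeros
    if len(set(head)) <= 2:
        return "frequent-pattern"      # few distinct byte values -> pattern/dictionary coding
    return "uncompressed"              # speculation says compression is unlikely to pay off

print(choose_compressor(bytes(64)))                      # -> all-zero
print(choose_compressor(bytes(4) + bytes(range(60))))    # -> zero-run
```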

Proceedings ArticleDOI
Asankhaya Sharma
04 Jun 2007
TL;DR: A new way towards ontology matching using the graph representation for ontologies and schemas is presented, which can be used as a quick-and-dirty method to do initial matching of a large dataset and then drill down to the exact match with other algorithms.
Abstract: In this paper, we present a new way towards ontology matching. Using the graph representation for ontologies and schemas, we proceed to calculate the weights for each node of the graph using the lexical similarity of the ancestors, with the guiding intuition that if the parent nodes match, then their children are likely to match as well. This simple observation helps one to build a fast and efficient algorithm for matching different graphs (which represent an ontology or schema). Since the algorithm is very fast, it can be used as a quick-and-dirty method to do initial matching of a large dataset and then drill down to the exact match with other algorithms. The algorithm is not dependent on the method used for calculating the lexical similarity, so the best lexical analysis can be used to derive the weights. Once the weights are in place we can calculate the matching in just a single traversal of the graphs. No other algorithm that we know of can give such a fast response time.
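The intuition that "if the parent nodes match then their children are likely to match" can be sketched by propagating an ancestor-derived weight down the two graphs in a single traversal. The sketch below uses a toy (label, children) tree representation and a placeholder lexical similarity; both are assumptions, not the paper's data structures or measure.

```python
def lexical_sim(a: str, b: str) -> float:
    """Placeholder lexical similarity; any string measure can be substituted."""
    a, b = a.lower(), b.lower()
    return len(set(a) & set(b)) / len(set(a) | set(b)) if (a or b) else 0.0

def match_trees(node_a, node_b, parent_weight=1.0, matches=None):
    """Single-traversal matching of two ontology trees: a node pair's weight is
    its own lexical similarity boosted by the weight inherited from its matched
    ancestors. Nodes are (label, [children]) tuples, a hypothetical representation."""
    if matches is None:
        matches = {}
    label_a, children_a = node_a
    label_b, children_b = node_b
    weight = parent_weight * (0.5 + 0.5 * lexical_sim(label_a, label_b))
    matches[(label_a, label_b)] = weight
    for ca, cb in zip(children_a, children_b):   # children of matching parents are compared next
        match_trees(ca, cb, weight, matches)
    return matches

print(match_trees(("Product", [("Price", [])]), ("Item", [("Cost", [])])))
```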

Proceedings ArticleDOI
13 Dec 2007
TL;DR: An object-oriented, ontology-based e-procurement process for the steel industry is developed on the basis of the knowledge of intelligent negotiating agents, represented as entities in an object-oriented paradigm.
Abstract: In e-commerce, negotiating agents play important roles in the transaction between two or more business enterprises. In e-business, therefore, agents need to be more intelligent and autonomous, and they should have negotiating capability, which requires knowledge of the underlying business logic. Besides raw data and abstracted information, the knowledge of an agent provides business intelligence, which in turn promotes smart business. In this paper we have developed an object-oriented, ontology-based e-procurement process in the steel industry on the basis of the knowledge of an intelligent negotiating agent, with entities modeled in an object-oriented paradigm.

Proceedings ArticleDOI
TL;DR: In this paper, the use of a chemically etched tapered single mode fiber tip for enhancing lateral resolution in optical coherence tomography (OCT) was reported, without compromising the depth of imaging.
Abstract: We report the use of a chemically etched tapered single mode fiber tip for enhancing lateral resolution in optical coherence tomography (OCT). The important advantage of this approach is that high lateral resolution is achieved, without compromising the depth of imaging, as is the case with the use of high numerical aperture (NA) objectives. Use of the tapered tip in the sample arm of a single mode fiber based set-up allowed visualization of intracellular structures of Elodea densa plant leaf that could not be seen by the conventional OCT.

Journal ArticleDOI
TL;DR: In this paper, the Misawa model was used to describe the intermolecular structure and correlation of GeCl4, VCl4 and other tetrachloride liquids.

Proceedings ArticleDOI
V. Natarajan
21 May 2007
TL;DR: In this paper, the authors investigated the effect of board conduction on the heat transfer performance from a vertically stacked (3D) electronic package (package-on-package) mounted in between two circuit boards.
Abstract: Convective heat transfer from a vertically stacked (3D) electronic package (package-on-package) mounted in between two circuit boards is numerically investigated. Heat transfer and pressure loss characteristics of single-chip packages and multiple-chip packages (P-O-P) for both two- and four-package stacks are presented and compared. The package Reynolds number, based on the package height and upstream velocity, ranges from 150 to about 1000. Two channel heights are investigated: a large channel (with bypass > 5B, where B is the package height) and a small channel (with bypass = B). The effect of board conduction on the heat transfer performance of the package is also quantified. A surprising finding is that, in the present set of configurations, the heat transfer for a multi-package stack, such as a two-package stack or a four-package stack, can be reasonably predicted using the heat transfer information from a single-chip package by employing a simple scalar factor. Finally, the effects of non-uniform heating on the die temperatures of the different packages in a stack are examined.
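For a feel of the quoted Reynolds number range, Re = ρ·U·B/μ with the package height B as the length scale. The sketch below back-solves the implied upstream velocities using assumed air properties and an assumed 2 mm package height; the paper's actual geometry is not reproduced here.

```python
# Back-of-the-envelope check of the Reynolds number range (all values assumed).
rho_air = 1.16      # kg/m^3, air near room temperature
mu_air = 1.85e-5    # Pa.s, dynamic viscosity of air
B = 0.002           # package height in metres (illustrative assumption)

for Re in (150, 1000):
    U = Re * mu_air / (rho_air * B)   # upstream velocity implied by Re = rho*U*B/mu
    print(f"Re = {Re}: U ~ {U:.2f} m/s")
```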

Book ChapterDOI
01 Jan 2007
TL;DR: In this paper, an analytical model derived from an energy theorem is coupled with the uncertainty associated with material properties and geometrical parameters to estimate fatigue life and crack growth at different levels of probability and confidence.
Abstract: The present paper describes an analytical model derived from an energy theorem, which is coupled with the uncertainty associated with material properties and geometrical parameters to estimate fatigue life and crack growth at different levels of probability and confidence. The analysis of cracks within a structure is an important application if the damage tolerance and durability of structures and components are to be predicted. As part of the engineering design process, engineers have to assess not only how well the design satisfies the performance requirements but also how durable the product will be over its life cycle. Often cracks cannot be avoided in structures; however, the fatigue life of the structure depends on the location and size of these cracks. In order to predict the fatigue life of any component, a crack growth study needs to be performed.
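To illustrate how coupling a crack-growth law with material-property uncertainty yields fatigue life at different probability levels, the sketch below runs a Monte-Carlo sampling of the Paris-law coefficient. This is a generic textbook example, not the chapter's energy-theorem model, and all numerical values are assumptions.

```python
import math
import random

def paris_life(a0, af, C, m, d_sigma, Y=1.0):
    """Cycles to grow a crack from a0 to af under the Paris law
    da/dN = C * (Y * d_sigma * sqrt(pi * a))**m, integrated in closed form (m != 2).
    Units: a in metres, d_sigma in MPa, C in m/cycle per (MPa*sqrt(m))**m."""
    k = C * (Y * d_sigma * math.sqrt(math.pi)) ** m
    return (af ** (1 - m / 2) - a0 ** (1 - m / 2)) / (k * (1 - m / 2))

random.seed(0)
lives = sorted(
    paris_life(a0=1e-3, af=10e-3,
               C=random.lognormvariate(math.log(1e-11), 0.3),  # assumed scatter in the Paris coefficient
               m=3.0, d_sigma=100.0)
    for _ in range(1000)
)
print(f"median life ~ {lives[500]:.2e} cycles, 10th-percentile life ~ {lives[100]:.2e} cycles")
```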


Proceedings ArticleDOI
09 Jul 2007
TL;DR: This paper proposes an alternate approach for achieving context replication indirectly, specially suited for telecom stacks, by replicating the incoming and outgoing messages at the stack level across the redundant setup.
Abstract: High availability in telecom systems is achieved by having redundant setups of both hardware and software. Software-level redundancy requires the process context to be replicated across the redundant setup, so that other processes can take over from where the failed process left off. Several common ways of achieving context replication are mentioned in the literature and are also presented in this paper along with their trade-offs. However, these approaches do not fare well when there are frequent context updates, which is typically the case for telecom and protocol stacks. This paper proposes an alternate approach for achieving context replication indirectly, specially suited for telecom stacks, by replicating the incoming and outgoing messages at the stack level across the redundant setup. As only the messages arriving at the stack are replicated, the overhead incurred by exchanging information whenever a context change takes place is avoided, resulting in superior performance.
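The core of the proposal is to replay the stack's inbound and outbound messages to a standby instance so that it rebuilds the same context, instead of shipping explicit context updates. The sketch below illustrates that idea with an in-process queue standing in for the inter-node transport; the class names and message-handling interface are assumptions, not the paper's design.

```python
import queue
import threading
import time

class ReplicatingStack:
    """Hypothetical wrapper around a protocol-stack message handler: every
    inbound/outbound message is also queued for a standby peer, which rebuilds
    its context by replaying the same messages (no explicit context updates)."""

    def __init__(self, handler, replica_queue):
        self.handler = handler              # the real stack's message handler
        self.replica_queue = replica_queue  # transport to the standby (a Queue here)

    def on_message(self, direction, msg):
        self.replica_queue.put((direction, msg))   # replicate first (ordering assumed reliable)
        return self.handler(direction, msg)        # then process locally

def standby_replayer(handler, replica_queue):
    """Standby side: replay replicated messages through an identical handler."""
    while True:
        direction, msg = replica_queue.get()
        handler(direction, msg)

# Toy wiring: both 'stacks' are plain functions; a real system would use a network link.
q = queue.Queue()
active = ReplicatingStack(lambda d, m: print("active handled", d, m), q)
threading.Thread(target=standby_replayer,
                 args=(lambda d, m: print("standby replayed", d, m), q),
                 daemon=True).start()
active.on_message("in", b"SETUP")
time.sleep(0.1)   # give the standby thread a moment to replay in this demo
```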

Book ChapterDOI
09 Jul 2007
TL;DR: Modeling and deployment of e-contracts is a challenging task because of the involvement of both technological and business aspects, and it is useful to have a system that models and enacts the evolution of e-contracts.
Abstract: Modeling and deployment of e-contracts is a challenging task because of the involvement of both technological and business aspects. There are several frameworks and systems available in the literature. Some works mainly deal with the automatic handling of paper contracts and others provide monitoring and enactment of contracts. Because contracts evolve, it is useful to have a system that models and enacts the evolution of e-contracts.