
Showing papers in "Opsearch in 2008"


Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: In this paper, a model for decision-making with respect to regularity during the design phase is proposed, highlighting important aspects to consider when deciding on what equipment to choose for achieving the regularity goals in the Arctic.
Abstract: In recent years there has been increasing interest in developing oil and gas fields in the Barents Sea, north of the Arctic Circle. However, there is little experience and data with respect to operating in such a harsh climate, sensitive environment and remote location. Hence, it is expected that one will face many challenges in developing offshore production facilities with respect to production regularity. The aim of this paper is to discuss production regularity for production facilities used in Arctic conditions and locations. Furthermore, we propose a model for decision-making with respect to regularity during the design phase. The model highlights important aspects to consider when deciding on what equipment to choose for achieving the regularity goals in the Arctic.

19 citations


Journal ArticleDOI
01 Jun 2008-Opsearch
TL;DR: A new Dinkelbach-type algorithm where the new iterate is determined using the information given by previous iterates and not only by the last one is introduced.
Abstract: In this paper we introduce a new Dinkelbach-type algorithm where the new iterate is determined using the information given by previous iterates and not only by the last one. The new algorithm is compared numerically with previous algorithms for generalized fractional programs.
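To make the baseline concrete, here is a minimal sketch of the classic single-ratio Dinkelbach iteration that Dinkelbach-type algorithms for generalized fractional programs extend; the paper's multi-iterate variant is not reproduced, and the objective functions f, g and the feasible interval below are made-up illustrations.

```python
# Classic Dinkelbach iteration for max f(x)/g(x) over an interval, shown as the
# baseline that Dinkelbach-type algorithms for fractional programs extend.
# The functions f, g and the feasible set [0, 4] are made-up examples.
from scipy.optimize import minimize_scalar

def f(x):          # numerator
    return -x**2 + 4*x + 1

def g(x):          # denominator, positive on the feasible set
    return x + 2

def dinkelbach(x0=0.0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        lam = f(x) / g(x)                      # current ratio estimate
        # Parametric subproblem: maximize f(x) - lam * g(x) on [0, 4]
        res = minimize_scalar(lambda t: -(f(t) - lam * g(t)),
                              bounds=(0.0, 4.0), method="bounded")
        x = res.x
        if abs(f(x) - lam * g(x)) < tol:       # F(lam) ~ 0 => optimal ratio
            break
    return x, f(x) / g(x)

x_opt, ratio = dinkelbach()
print(f"x* = {x_opt:.4f}, best ratio = {ratio:.4f}")
```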

17 citations


Journal ArticleDOI
01 Jun 2008-Opsearch
TL;DR: The explicit expressions for steady state distribution of the number of customers in the queue are obtained and the expected system length is derived for the three bulk size distributions: Deterministic, Geometric and Positive Poisson based on assumed numerical values given to the system parameters.
Abstract: This paper deals with the analysis of a two-phase M[X]/Ek/1 queueing system with N-policy for exhaustive batch service, with and without gating. Customers arrive in batches of random size according to a Poisson process and receive batch service in the first phase and individual service in the second phase. After providing the second phase of service to all the customers in the batch, the server returns to attend to new customers who have arrived. If customers are waiting, the server restarts the cycle by providing them batch service followed by individual service. In the absence of customers, the server takes a vacation and returns only after N customers join the queue to start the service. Explicit expressions for the steady-state distribution of the number of customers in the queue are obtained, and the expected system length is derived. A cost model is developed to determine the optimum value of N. The expected system length is evaluated for three bulk-size distributions, Deterministic, Geometric and Positive Poisson, based on assumed numerical values of the system parameters. Sensitivity analysis is also carried out.

15 citations


Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: A general framework for deriving several software reliability growth models with change-point concept based on non-homogeneous Poisson process (NHPP) is proposed and some existing change-point models along with three new models have been derived from the proposed general framework.
Abstract: Reliability of software often depends considerably on the quality of software testing. By assessing reliability we can also judge the quality of testing; alternatively, reliability estimation can be used to decide whether enough testing has been done. Hence, besides characterizing an important quality property of the product being delivered, reliability estimation has a direct role in project management, the reliability models being used by the project manager to decide when to stop testing (Jalote [12]). A plethora of software reliability growth models (SRGM) have been developed during the last three decades, incorporating various software development environments and assumptions. From our studies, many existing SRGM can be unified under a more general formulation. In fact, model unification is an insightful investigation for the study of general models without making many assumptions. In the literature, various software reliability models incorporating the change-point concept have been proposed; to the best of our knowledge these models have been developed separately. In this paper we propose a general framework for deriving several software reliability growth models with the change-point concept based on the non-homogeneous Poisson process (NHPP). Some existing change-point models along with three new models have been derived from the proposed general framework. The derived models have been validated and verified using real data sets. Estimated parameters and comparison-criteria results are also presented.
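As an illustration of the kind of model covered by such a framework, the sketch below fits a single change-point Goel-Okumoto-type NHPP mean value function by nonlinear least squares; it is one well-known member of the change-point family, not the authors' general formulation, and the weekly fault counts are synthetic placeholders.

```python
# Change-point Goel-Okumoto NHPP mean value function, fitted by nonlinear
# least squares. One member of the change-point SRGM family discussed in the
# paper; the cumulative fault-count data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def m(t, a, b1, b2, tau):
    t = np.asarray(t, dtype=float)
    # detection rate b1 before the change point tau, b2 afterwards
    exponent = np.where(t <= tau, b1 * t, b1 * tau + b2 * (t - tau))
    return a * (1.0 - np.exp(-exponent))

weeks  = np.arange(1, 16)
faults = np.array([5, 9, 14, 18, 21, 25, 27, 30, 34, 38, 41, 43, 45, 46, 47])

p0 = [60, 0.1, 0.2, 8]                      # initial guesses: a, b1, b2, tau
params, _ = curve_fit(m, weeks, faults, p0=p0, maxfev=10000)
a, b1, b2, tau = params
print(f"a={a:.1f}, b1={b1:.3f}, b2={b2:.3f}, change point ~ week {tau:.1f}")
print("predicted faults by week 20:", round(m(20, *params), 1))
```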

10 citations


Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: This paper presents two alternative quasi renewal processes based on the quasi-renewal process recently developed by Wang and Pham, which are used for developing the warranty cost models, reliability and other measures for k-out-of-n systems.
Abstract: In this paper, we present two alternative quasi-renewal processes based on the quasi-renewal process recently developed by Wang and Pham [17]. The first alternative is an altered quasi-renewal process with a random parameter and the other is a mixed quasi-renewal process considering replacements and repairs. These mixed and altered quasi-renewal processes are used to develop warranty cost models, reliability and other measures for k-out-of-n systems. A numerical example is discussed to demonstrate the applicability of the proposed methodology.
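For intuition, a minimal Monte Carlo sketch of a quasi-renewal process in the Wang-Pham sense follows: successive times between failures are scaled by a factor alpha after each repair, and the expected number of failures within a warranty period is estimated by simulation. The exponential first-failure distribution, alpha and the warranty length are assumptions, not values from the paper.

```python
# Monte Carlo sketch of a quasi-renewal process in the sense of Wang and Pham:
# the n-th time between failures is distributed as alpha**(n-1) times the
# first one, so alpha < 1 models degradation after each repair. The
# exponential first-failure distribution, alpha and warranty length are
# illustrative assumptions, not values from the paper.
import random

def failures_in_warranty(alpha=0.9, mean_first=100.0, warranty=365.0):
    t, n = 0.0, 0
    while True:
        x = random.expovariate(1.0 / mean_first) * (alpha ** n)  # n-th gap
        if t + x > warranty:
            return n
        t += x
        n += 1

random.seed(1)
runs = 20000
expected = sum(failures_in_warranty() for _ in range(runs)) / runs
print(f"expected failures within warranty ~ {expected:.3f}")
```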

9 citations


Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: A framework to classify testing effort models is proposed, aimed at identifying their commonalities and highlighting their differences, and of the limitations of the prevalent works in this domain.
Abstract: Explicitly relating the effectiveness of fault detection to the effort expended in testing, achieved by incorporating testing effort into software reliability models has been the focus of many research efforts. Although the literature is replete with these “testing effort models,” their development appears to be ad hoc and disconnected. The objective of this survey is to propose a framework to classify testing effort models, aimed at identifying their commonalities and highlighting their differences. We conclude the article with a brief discussion of the limitations of the prevalent works in this domain, which also identify directions for future research.

8 citations


Journal ArticleDOI
01 Mar 2008-Opsearch
TL;DR: In this paper, the authors assume that only the first and second moments of the probability distribution of lead time demand are known, and they assume that order quantity and backorder price discount are decision variables in their problem with mixture of backorders and lost sales.
Abstract: Lead time and setup cost are controllable variables in our continuous review inventory model. In this study we assume that only the first and second moments of the probability distribution of lead-time demand are known. Order quantity and backorder price discount are decision variables in our problem with a mixture of backorders and lost sales. Reducing lead time can yield significant savings, as shown through numerical examples.
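The distribution-free ingredient can be illustrated with the standard min-max bound on expected shortage per cycle when only the mean and standard deviation of lead-time demand are known; the numbers below are illustrative, and the full model's backorder price discount and setup-cost reduction are omitted.

```python
# Distribution-free (min-max) bound on expected shortage per cycle when only
# the mean and standard deviation of lead-time demand are known (the classic
# Gallego-Moon bound used in models of this type). All numbers are
# illustrative, not taken from the paper.
import math

def expected_shortage_bound(mu_L, sigma_L, r):
    """Worst-case E[(X - r)+] over all distributions with given mean/std."""
    return 0.5 * (math.sqrt(sigma_L**2 + (r - mu_L)**2) - (r - mu_L))

demand_rate, sigma_per_day = 20.0, 7.0
for lead_time in (8.0, 4.0):                       # days, before/after crashing
    mu_L = demand_rate * lead_time
    sigma_L = sigma_per_day * math.sqrt(lead_time)
    r = mu_L + 1.5 * sigma_L                       # reorder point (assumed)
    print(f"L={lead_time:>4} days: worst-case shortage/cycle "
          f"= {expected_shortage_bound(mu_L, sigma_L, r):.2f} units")
```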

6 citations


Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: The aim of this paper is to define maintainability importance measures in order to find the criticality of each component or subsystem from the maintainability point of view.
Abstract: The performance of a system depends upon its components. Some components have a greater influence on system reliability and maintainability than others. Hence, several component importance measures have been well defined and widely used in the reliability area. These importance measures enable the weakest and most critical areas of a system to be identified and indicate which should be modified to improve production plant performance. The aim of this paper is to define maintainability importance measures in order to find the criticality of each component or subsystem from the maintainability point of view. Such importance measures should be useful for resource allocation to improve production plant performance in both the design and operation phases.

5 citations


Journal ArticleDOI
01 Mar 2008-Opsearch
TL;DR: Optimal operating policy is achieved under a linear cost structure and a sensitivity analysis is presented through numerical illustrations, and various system measures and stochastic decomposition property are obtained using generating functions.
Abstract: This paper analyses the modeling of a production system designed as an N-policy M[x]/M/1 queueing system with a removable and non-reliable server. The server spends a random period on a startup procedure before each new service. The server does not start production until a specified number 'N' of raw material units has accumulated in the queue, and it stays idle when there is no input unit to process. The server is susceptible to random breakdowns, in which case it is repaired at once and resumes service. The units are assumed to arrive in batches of random size. Various system measures and a stochastic decomposition property are obtained using generating functions. The optimal operating policy is derived under a linear cost structure and a sensitivity analysis is presented through numerical illustrations.
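As a much-reduced illustration of choosing N under a linear cost structure, the sketch below searches over N for the plain N-policy M/M/1 queue (no batch arrivals, startup time or breakdowns), using the textbook results E[L] = rho/(1-rho) + (N-1)/2 and a setup frequency of lambda(1-rho)/N; the cost rates are assumptions.

```python
# Minimal sketch of the kind of optimal-N search performed in such models, but
# for the much simpler N-policy M/M/1 queue (no batch arrivals, startup time
# or breakdowns): textbook results give E[L] = rho/(1-rho) + (N-1)/2 and a
# setup frequency of lambda*(1-rho)/N. The cost rates below are assumptions.
lam, mu = 3.0, 5.0           # arrival and service rates
h, R = 2.0, 40.0             # holding cost per unit-time, setup cost per cycle
rho = lam / mu

def cost_rate(N):
    holding = h * (rho / (1 - rho) + (N - 1) / 2)
    setup = R * lam * (1 - rho) / N
    return holding + setup

best_N = min(range(1, 51), key=cost_rate)
print(f"optimal N = {best_N}, cost rate = {cost_rate(best_N):.2f}")
```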

4 citations


Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: This paper formulates an optimization problem for determining the optimal time at which software testing is stopped and the system is ready for use in the operational phase, with the prime objective of minimizing risk cost subject to budget constraints and failure intensity.
Abstract: Critical systems exist all around us, from nuclear power plants to chemical processing plants to heart monitors and emergency phone systems. As software and processors pervade critical systems, the risk posed by software failure is enormous. The main emphasis of software industries developing these systems is to put a great deal of deliberation and thought into making them as safe as possible. Safety is a nebulous concept, and is therefore difficult to define or measure. In this paper we measure safety in the form of risk and the costs associated with it. We formulate an optimization problem for determining the optimal time at which software testing is stopped and the system is ready for use in the operational phase, with the prime objective of minimizing risk cost subject to budget constraints and failure intensity. We also consider uncertainty and ambiguity in the cost and risk function coefficients, the available budget and the failure intensity, arising from intense competition in the global market, varying client requirements, the rapid evolution of information technology, system complexity, intended flexibility and poor databases, to name a few. For this we define the constrained optimization problem under a fuzzy environment. Finally, we discuss a fuzzy optimization technique for solving the problem with the help of a numerical illustration.
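A crisp (non-fuzzy) simplification of such a release-time problem is sketched below: with a Goel-Okumoto mean value function, the stopping time minimizes the risk cost of residual faults subject to a testing budget and a failure-intensity requirement. All parameter values are assumptions, and the fuzzy treatment that is the paper's contribution is not reproduced.

```python
# Crisp simplification of the release-time problem: with a Goel-Okumoto
# mean value function m(T) = a*(1 - exp(-b*T)), choose the stopping time T
# that minimizes the risk cost of residual faults subject to a testing budget
# and a failure-intensity requirement. All parameter values are assumptions;
# the paper itself treats them as fuzzy quantities, which this sketch does not.
import numpy as np

a, b = 120.0, 0.05              # expected total faults, detection rate
c_test, c_risk = 50.0, 400.0    # testing cost per unit time, cost per residual fault
budget, lam_max = 6000.0, 0.5   # budget and allowed failure intensity

def intensity(T):  return a * b * np.exp(-b * T)
def residual(T):   return a * np.exp(-b * T)
def risk_cost(T):  return c_risk * residual(T)
def test_cost(T):  return c_test * T

T_grid = np.linspace(1, 200, 4000)
feasible = [T for T in T_grid
            if test_cost(T) <= budget and intensity(T) <= lam_max]
if feasible:
    T_star = min(feasible, key=risk_cost)
    print(f"release at T = {T_star:.1f}, residual faults ~ {residual(T_star):.1f}")
else:
    print("no feasible release time under the given budget/intensity limits")
```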

4 citations


Journal ArticleDOI
01 Mar 2008-Opsearch
TL;DR: The methodology is applied to the Chance Constrained model where the constraints have two different types of fuzzy inequalities and the method is justified through numerical examples.
Abstract: This paper deals with a methodology for solving a chance-constrained fuzzy linear programming problem. The methodology is applied to the chance-constrained model where the constraints have two different types of fuzzy inequalities, and the method is justified through numerical examples.
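For the stochastic half of such a model, the sketch below solves a chance-constrained LP with normally distributed right-hand sides through its standard deterministic equivalent; the fuzzy inequalities treated in the paper are omitted and all data are invented.

```python
# Deterministic equivalent of a chance-constrained LP with normal right-hand
# sides: Pr(a_i.x <= b_i) >= p_i becomes a_i.x <= mu_i + sigma_i * z_(1-p_i).
# This sketch covers only the stochastic part; the fuzzy inequalities treated
# in the paper are omitted. All data below are made up.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

c = np.array([-3.0, -5.0])                 # maximize 3*x1 + 5*x2
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
mu    = np.array([4.0, 12.0, 18.0])        # means of random RHS b_i
sigma = np.array([0.5, 1.0, 1.5])          # std devs of b_i
p     = np.array([0.95, 0.95, 0.90])       # required satisfaction probabilities

b_det = mu + sigma * norm.ppf(1.0 - p)     # deterministic-equivalent RHS
res = linprog(c, A_ub=A, b_ub=b_det, bounds=[(0, None), (0, None)])
print("x* =", np.round(res.x, 3), " objective =", round(-res.fun, 3))
```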

Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: Experiences in developing applications in Java Enterprise Edition (JEE) with customized RUP are presented, providing a basis to achieve increased reliability qualitatively, with higher productivity and lower defect density, along with competitiveness through cost-effective custom software solutions.
Abstract: In a competitive business landscape, large organizations such as insurance companies and banks are under high pressure to innovate, improvise and distinguish their products and services while continuing to reduce the time-to-market for new product introductions. Generating a single view of the customer is vital from the different perspectives of the systems developer because of the existence of disconnected systems within an enterprise. Therefore, to increase revenues and optimize costs, it is important to build enterprise systems more closely aligned with the business requirements by reusing existing systems. While building distributed applications, it is important to take into account proven processes such as the Rational Unified Process (RUP) to mitigate risks and increase the reliability of systems. Experiences in developing applications in Java Enterprise Edition (JEE) with customized RUP are presented in this paper. RUP is adopted into an onsite-offshore development model along with ISO 9001 and SEI CMM Level 5 standards. This paper provides a basis to achieve increased reliability qualitatively, with higher productivity and lower defect density, along with competitiveness through cost-effective custom software solutions. Qualitative reliability, the expected number of defects in the software, is obtained from the Proof-of-Concept (PoC) through the RUP-implemented prototype. Based on the prototype, the critical parameter(s) affecting the QoS are then estimated using the Analytical Network Process (ANP) prior to actual implementation of the application development.

Journal ArticleDOI
01 Jun 2008-Opsearch
TL;DR: An attempt is made to extend the software cost model proposed by William et al. [1] by considering cost factors such as test effort, the cost of imperfect rectification of errors and lifetime warranty cost.
Abstract: The optimum release time or total testing time of a software product, subject to the desired quality and total testing cost, is an important issue. In this paper an attempt is made to extend the software cost model proposed by William et al. [1] by considering cost factors such as test effort, the cost of imperfect rectification of errors and lifetime warranty cost. The cost of software testing is the sum of the initial testing cost, the cost of testing the software per unit time and the warranty cost. For this model the optimum release time and optimum release policy are proposed by minimizing the cost function subject to the desired reliability levels under two situations: (i) when warranty is provided to retain the reliability level promised at the time of software release, and (ii) when warranty is provided to increase the reliability level from the time of software release.

Journal ArticleDOI
01 Mar 2008-Opsearch
TL;DR: An algorithm for solving a multi-level programming problem using a linear pre-emptive goal programming model, in which the higher-level decision maker provides the preferred values of the decision variables under his control and the target value of his objective function to the next-level DM, formulating a goal programming problem equivalent to the given multi-level programming problem.
Abstract: This paper presents an algorithm for solving a multi-level programming problem using a linear pre-emptive goal programming model. The higher level decision maker (DM) provides the preferred values of the decision variables under his control and the target value of his objective function to the next level DM to formulate a goal programming problem equivalent to the given multi-level programming problem. It is illustrated with the help of an example of a tri-level programming problem.

Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: A mathematical formulation has been derived for the discard rate of aircraft based on failure rate, mission life and remaining life of the aircraft in the fleet, which helps in managing the demand rate of the units during the phase-out of the aircraft fleet.
Abstract: Maintenance decisions for the repairable units (LRUs) of an aircraft fleet need to be considered carefully while phasing out the fleet, in terms of cost effectiveness and fleet availability. The discard rate and phase-out period of an aircraft are the critical parameters for determining the optimum time to stop maintenance. The remaining economic value of the useful life of an aircraft fleet should be taken into consideration by salvaging the LRUs at the end of the phase-out. These units can often be utilized for the aircraft staying in operation, and this can influence the maintenance strategy for the units and the aircraft fleet. By salvaging units with remaining service life from retired aircraft, the stock level of units relative to operational aircraft will increase. This gives an opportunity to modify the maintenance strategy: owing to the increased number of units in stock, units are discarded instead of being maintained, which is a cost-effective strategy. In this paper a methodology is suggested to optimize the availability of repairable units at the lowest life cycle cost, in order to decide at which point further maintenance can safely be stopped and maintenance resources discarded. A mathematical formulation is derived for the discard rate of aircraft based on the failure rate, mission life and remaining life of the aircraft in the fleet, which helps in managing the demand rate of the units during the phase-out of the aircraft fleet.

Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: An approach to integrate ship fire safety assessment and decision-making using the Analytical Hierarchy Process (AHP) method is developed, which utilises the AHP theory to rank the fire events and further integrates the available control options within the analysis.
Abstract: Ship fire safety is increasingly attracting attention from both researchers and engineers. This paper develops an approach to integrate ship fire safety assessment and decision-making using the Analytical Hierarchy Process (AHP) method. The approach can be used to help reduce the probability of fire occurrence and severity of possible consequences during the operational phase of a passenger ship. It utilises the AHP theory to rank the fire events and further integrates the available control options (to minimise these fires) within the analysis. A test case on the operation of a passenger ship is used to demonstrate the approach.
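The core AHP computation used in this kind of approach is easy to sketch: priority weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio. The 3x3 matrix for three hypothetical fire events below is illustrative, not data from the paper.

```python
# AHP priority weights from a pairwise comparison matrix via the principal
# eigenvector, with Saaty's consistency ratio. The 3x3 comparison matrix for
# three hypothetical fire events is illustrative, not data from the paper.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],      # e.g. engine-room vs. cabin vs. galley fire
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                      # normalized priority weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)              # consistency index
CR = CI / 0.58                                    # Saaty random index for n = 3
print("priorities:", np.round(w, 3), " CR =", round(CR, 3))
```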

Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: A multivariate analysis is conducted by using process measurement data, and a relational expression based on statistically significant factors is derived, which can quantitatively predict final product quality/reliability.
Abstract: Software development productivity and product quality are related to quality of the software development process. Therefore, if we can improve quality of software development process based on project management technologies, software development productivity and product quality will be increased. In this paper, we conduct a multivariate analysis by using process measurement data, and derive a relational expression based on statistically significant factors, which can quantitatively predict final product quality/reliability. Furthermore, we apply a method of collaborative filtering by using process measurement data to predict final product quality from the similarity of software projects. Finally, we compare the results of two methods, i.e., multiple regression analysis and collaborative filtering, in terms of predictive accuracy of final product quality/reliability.

Journal ArticleDOI
01 Mar 2008-Opsearch
TL;DR: In this article, a blocked, quadratic regression model with a predictor variable is presented, which does not allow direct measurement but may be estimated from other observations, and a solution method is outlined to find the estimated values of the model parameters.
Abstract: We present a blocked, quadratic regression model in this article. The model has a predictor variable, which does not allow direct measurement but may be estimated from other observations. A solution method is outlined to find the estimated values of the model parameters and such a predictor variable. The model has substantial scope of application. Such an application, in business school ranking, is discussed.

Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: In this article, an empirical Bayesian software reliability model is considered, where the times between failures follow a Rayleigh distribution whose failure-rate parameter is stochastically decreasing over successive failure time intervals.
Abstract: An empirical Bayesian software reliability model is considered in this paper. It is assumed that the times between failures follow a Rayleigh distribution whose failure-rate parameter is stochastically decreasing over successive failure time intervals. The reasoning behind this assumption is that the software tester intends to improve software quality by correcting each failure. With the Bayesian approach, the predictive distribution is arrived at by combining Rayleigh times between failures with a gamma prior distribution for the parameter. The expected time-between-failures measure is obtained, and the posterior distribution of the parameter and its mean are deduced. For parameter estimation, the maximum likelihood estimation (MLE) method is adopted. The proposed model has been applied to two sets of actual software failure data, and it is observed that the failure times predicted by the proposed model are closer to the actual failure times. The predicted failure times based on the Littlewood-Verrall (LV) model are also computed. The sum of squared errors (SSE) criterion is used to compare the actual and predicted times between failures for the proposed model and the LV model.

Journal ArticleDOI
01 Jun 2008-Opsearch
TL;DR: A branch and bound algorithm for the partial coverage capacitated facility location problem is developed, and the case when the demand of a customer may not be satisfied completely, giving rise to "opportunity demand", is discussed.
Abstract: In this paper, a branch and bound algorithm for the partial coverage capacitated facility location problem is developed. The objective is to fix open a set of warehouses that is economically feasible. The algorithm is an extension of those given by B. M. Khumawala [10] and Sudha Arora and S. R. Arora [2]. The case when the demand of a customer may not be satisfied completely, giving rise to "opportunity demand", is also discussed. This is illustrated with the help of examples and their computational results.
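For scale, the brute-force baseline that a branch and bound algorithm improves upon can be written in a few lines: enumerate every subset of warehouses, solve the resulting transportation LP, and keep the cheapest feasible choice. The costs, capacities and demands below are invented, and no partial coverage or opportunity demand is modeled.

```python
# Brute-force baseline for the capacitated facility location problem: open
# every subset of warehouses, solve the resulting transportation LP, and keep
# the cheapest feasible combination. A branch and bound algorithm prunes this
# search; the small cost/capacity/demand data here are invented.
import itertools
import numpy as np
from scipy.optimize import linprog

fixed = np.array([120.0, 100.0, 90.0])            # fixed cost of opening warehouse i
cap   = np.array([70.0, 50.0, 40.0])              # capacity of warehouse i
dem   = np.array([30.0, 40.0, 25.0])              # demand of customer j
ship  = np.array([[4.0, 6.0, 9.0],                # unit shipping cost i -> j
                  [5.0, 4.0, 7.0],
                  [6.0, 3.0, 4.0]])

def transport_cost(open_set):
    m, n = len(open_set), len(dem)
    c = ship[open_set, :].ravel()                 # flows x[i, j], row-major
    A_ub = np.zeros((m, m * n)); A_eq = np.zeros((n, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0          # capacity of open warehouse i
    for j in range(n):
        A_eq[j, j::n] = 1.0                       # demand of customer j
    res = linprog(c, A_ub=A_ub, b_ub=cap[open_set],
                  A_eq=A_eq, b_eq=dem, bounds=(0, None))
    return res.fun if res.success else np.inf

best_cost, best_open = np.inf, None
for r in range(1, len(fixed) + 1):
    for subset in itertools.combinations(range(len(fixed)), r):
        total = fixed[list(subset)].sum() + transport_cost(list(subset))
        if total < best_cost:
            best_cost, best_open = total, subset
print(f"best cost = {best_cost:.1f} with warehouses open: {best_open}")
```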

Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: An approximate explicit method from the literature is modified to incorporate additional variables affecting the diffusion rate; it accounts for uncertainties in the input parameters and predicts the expected time of first corrosion for a chosen risk of corrosion.
Abstract: The corrosion initiation time of steel reinforcement in partially saturated concrete members subjected to chloride ingress is investigated at five geographic locations along the Indian coast. An approximate explicit method from the literature is modified to incorporate additional variables affecting the diffusion rate. The method accounts for uncertainties in the input parameters and predicts the expected time of first corrosion for a chosen risk of corrosion. The method is also utilized to study the sensitivity of the parameters to reinforcement corrosion. A previously proposed diffusion-based chloride ingress model is used for the analysis of the time to initiate corrosion (corrosion initiation time). Corrosion is initiated when the chloride concentration at the steel reinforcement exceeds a threshold value. Considerable variation in corrosion initiation time is observed for the same concrete structure at different geographic locations. Life-365 predicts the time to corrosion initiation assuming fully saturated concrete. Comparing the results of the analysis for partially saturated and fully saturated concrete, it was found that Life-365 underestimates the time to corrosion initiation. Corrosion initiation times, in ascending order, were found at Colaba, Kanyakumari, Santacruz, Chennai and Vishakhapatnam. Knowledge of the corrosion initiation time is useful for an owner, designer or organization in deciding on a repair strategy and prioritizing the repair of structures for corrosion protection, in order to optimize maintenance planning and budgeting, since planned maintenance at the optimum time is the safest and most cost-effective approach.
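The deterministic core of diffusion-based corrosion initiation models is the Fick's second-law solution C(x, t) = Cs(1 - erf(x / (2*sqrt(D*t)))), inverted for the time at which the chloride level at the cover depth reaches the threshold; the sketch below uses illustrative parameter values and leaves out the humidity and uncertainty treatment that the paper adds.

```python
# Deterministic core of diffusion-based corrosion initiation: Fick's
# second-law solution C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t)))),
# inverted for the time at which the chloride level at the rebar cover depth
# reaches the threshold. Parameter values are illustrative; the paper adds
# partial-saturation effects and treats the inputs probabilistically.
from scipy.special import erfinv

def initiation_time(cover_m, D_m2_per_s, C_surface, C_threshold):
    """Years until chloride at the cover depth first reaches the threshold."""
    z = erfinv(1.0 - C_threshold / C_surface)
    t_seconds = cover_m**2 / (4.0 * D_m2_per_s * z**2)
    return t_seconds / (365.25 * 24 * 3600)

t_init = initiation_time(cover_m=0.05,            # 50 mm cover
                         D_m2_per_s=1e-12,        # apparent diffusion coefficient
                         C_surface=0.6,           # % by weight of concrete (assumed)
                         C_threshold=0.05)        # threshold chloride level (assumed)
print(f"corrosion initiation after ~ {t_init:.1f} years")
```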

Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: This paper proposes a new approach to software reliability assessment by creating a fusion of neural network and stochastic differential equations based on component importance levels and compares the goodness-of-fit of the proposed models with the conventional software reliability growth model for OSS.
Abstract: Network technologies are becoming increasingly complex across a wide sphere. In particular, open source software systems, which serve as key components of critical infrastructures in society, are still ever-expanding. In this paper, we propose a new approach to software reliability assessment by creating a fusion of neural networks and stochastic differential equations based on component importance levels. We also analyze actual software fault-count data to show numerical examples of software reliability assessment considering component importance levels for open source software. Moreover, we compare the goodness-of-fit of the proposed models with that of a conventional software reliability growth model for OSS.

Journal ArticleDOI
01 Dec 2008-Opsearch
TL;DR: This framework calibrates test data with field observations, and thus forms a closed-loop approach to evaluate the reliability and availability of the software product and verify that the product meets specific reliability expectations.
Abstract: Traditional software deployment readiness criteria, such as "zero severity-one defects", do not provide any indication of how reliable the product will be in the field. In this paper, we propose a software reliability prediction framework to achieve data-driven, customer-focused reliability and availability assessment throughout the entire development life cycle. Focusing on front-end reliability and availability improvement, the framework starts with availability evaluations as early as the architecture design phase. Markov-based architecture reliability models are used to study the failure and failure-recovery mechanisms of the systems and solutions. These early evaluations can help architecture design, reliability requirement setting and reliability budget allocation. The early-phase models and predictions can be updated as testing data become available. Software reliability growth models (SRGMs) are used to estimate one of the most influential parameters, the failure rate of the software. Estimation of other reliability parameters, such as the coverage factor, silent failure detection times, recovery durations and success probabilities, is also discussed in this paper. The framework also calibrates test data with field observations, and thus forms a closed-loop approach to evaluating the reliability and availability of the software product and verifying that it meets specific reliability expectations.
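The simplest instance of such an architecture availability model is a two-state up/down continuous-time Markov chain, for which steady-state availability is mu/(lambda + mu); the failure and repair rates below are placeholders standing in for an SRGM estimate and an assumed mean recovery time.

```python
# Minimal two-state (up/down) continuous-time Markov availability model of the
# kind used in early architecture evaluations: steady-state availability is
# mu / (lambda + mu). The failure rate stands in for an SRGM estimate and the
# repair rate for an assumed mean recovery time; both are placeholders.
failure_rate = 2.0e-4      # failures per hour (e.g. from an SRGM fit)
mttr_hours   = 0.5         # mean time to recover
repair_rate  = 1.0 / mttr_hours

availability = repair_rate / (failure_rate + repair_rate)
downtime_min_per_year = (1.0 - availability) * 365.25 * 24 * 60
print(f"steady-state availability = {availability:.6f}")
print(f"expected downtime ~ {downtime_min_per_year:.1f} minutes/year")
```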

Journal ArticleDOI
01 Jun 2008-Opsearch
TL;DR: An algorithm is proposed to find an α-Pareto optimal solution and the corresponding stability set of the first kind, and some basic stability notions are defined and characterized for the problem of concern.
Abstract: This paper deals with bicriterion mathematical programming problems with fuzzy numbers in the two objectives and free parameters in the right-hand side of the constraints. An algorithm is proposed to find an α-Pareto optimal solution and the corresponding stability set of the first kind. Some basic stability notions are defined and characterized for the problem of concern. Finally, an illustrative nonlinear numerical example is given to clarify the algorithm.

Journal ArticleDOI
01 Sep 2008-Opsearch
TL;DR: An approach to assess the health of the bulk power system by incorporating fuzzy sets is provided, extending conventional probabilistic reliability analysis to assess the well-being of a composite power system in the adequacy domain.
Abstract: With increased energy demand, less new transmission and open access, the power system is experiencing a much greater level of power transfer. These new requirements push the system to its limits for maximum economic benefit, while maintaining sufficient security margins, which requires network analysis. A practical interconnected system can collapse because a number of different limits, such as thermal and operating-reserve limits, are exceeded. Usually, probabilistic methods are used in conventional reliability assessment. A large amount of uncertainty is implicit in the estimate of system reliability because of insufficient failure data and variation in environmental conditions. In practice, the limits imposed by operators on power system parameters, such as line flows and bus voltages, are treated as crisp in deterministic techniques, but in reality these limits are no longer crisp and should be considered as soft constraints. The reliability parameters, such as failure and repair rates, used in probabilistic models essentially come from historical operation records, which leads to considerable data uncertainty. In this paper an approach to assess the health of the bulk power system by incorporating fuzzy sets is suggested. To deal with the issue of a large number of contingencies, a fuzzy logic based ranking of outages is also illustrated. The paper thus extends conventional probabilistic reliability analysis with fuzzy sets for the assessment of the well-being of a composite power system in the adequacy domain.

Journal ArticleDOI
01 Jun 2008-Opsearch
TL;DR: An interesting generalization of the model, M/((H2+U)/2)2, has been presented, where (H2+U) is a mixture of two distributions, one hyper-gamma and the other continuous uniform, so that the customer gets faster and better service.
Abstract: The present paper deals initially with M/((M+U)/1)2. By allowing one of the two servers to have a continuous uniform service time distribution, we make our model more realistic. We consider various strategies for a smart customer. Finally, an interesting generalization of it in the form M/((H2+U)/2)2 is presented, where (H2+U) is a mixture of two distributions, one hyper-gamma and the other continuous uniform, so that the customer gets faster and better service.

Journal ArticleDOI
01 Mar 2008-Opsearch
TL;DR: The characteristics of Skip-lot Sampling Plans (SkSP), originally developed by Dodge and Perry, are reconsidered in this paper from a computational point of view, and it is shown that an admissible reference plan can always be generated with Excel functions; templates are developed to obtain the characteristics of SkSP.
Abstract: The characteristics of Skip-lot Sampling Plans (SkSP), originally developed by Dodge and Perry, are reconsidered in this paper from a computational point of view. The reference plan plays an important role in the performance of SkSP. A Single Sampling Plan (SSP) is normally used as the reference plan, and a number of procedures are available to determine the SSP. We focus on algorithm-based procedures, instead of methods based on statistical tables, to determine the SSP and its effect on the performance indicators of SkSP. Spreadsheet solutions are nowadays more user-friendly than customized programs written in specific languages. We present a case of using Excel worksheet functions to handle the statistical distributions required in the determination of the plan. The performance of SkSP is compared by generating an SSP as reference plan using algorithms due to i) Guenther and ii) modified Graf et al. It is shown that an admissible reference plan can always be generated with Excel functions, and templates are developed to obtain the characteristics of SkSP.
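The same computations translate directly from Excel worksheet functions to any language with a binomial CDF; the sketch below evaluates the operating characteristic of a single sampling plan and of a skip-lot plan SkSP-2 built on it, using the standard SkSP-2 formula. The plan parameters (n=50, c=1, f=1/3, i=4) are illustrative, not the paper's.

```python
# OC curves for a skip-lot plan SkSP-2 with a single sampling plan (n, c) as
# reference, using the standard SkSP-2 formula
#   Pa(p) = (f*P + (1-f)*P**i) / (f + (1-f)*P**i),
# where P is the reference-plan acceptance probability, f the sampling
# fraction and i the clearance number. The plan parameters below are
# illustrative, not taken from the paper.
from scipy.stats import binom

def ssp_accept(p, n=50, c=1):
    """Reference single sampling plan: accept if at most c defectives in n."""
    return binom.cdf(c, n, p)

def sksp2_accept(p, n=50, c=1, f=1/3, i=4):
    P = ssp_accept(p, n, c)
    return (f * P + (1 - f) * P**i) / (f + (1 - f) * P**i)

for p in (0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"p={p:5.3f}  SSP Pa={ssp_accept(p):.3f}  SkSP-2 Pa={sksp2_accept(p):.3f}")
```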