
Showing papers in "Asia-Pacific Journal of Operational Research in 2008"


Journal ArticleDOI
TL;DR: A review of the advances in inventory literature under conditions of permissible delay in payments since 1985 is presented to bring out pertinent information regarding model developments in the past two decades.
Abstract: Since the publication of the Goyal model in 1985, research on the modeling of inventory lot-size under trade credits has resulted in a body of literature. In this paper, we present a review of the advances in inventory literature under conditions of permissible delay in payments since 1985. We classify all related previous articles into five categories based on: (a) without deterioration, (b) with deterioration, (c) with allowable shortage, (d) linked to order quantity, and (e) with inflation. The motivations, extensions and weaknesses of various previous models have been discussed in brief to bring out pertinent information regarding model developments in the past two decades.

108 citations


Journal ArticleDOI
TL;DR: A new improved weighted additive model for solving fuzzy goal programming problems is introduced and the relationships between the new model and some of the existing models are discussed and proved.
Abstract: Weighted additive models are well known for dealing with multiple criteria decision making problems. Fuzzy goal programming is a branch of multiple criteria decision making which has been applied to solve real life problems. Several weighted additive models have been introduced to handle fuzzy goal programming problems. These models are based on two approaches in fuzzy goal programming, namely goal programming and fuzzy programming techniques. However, some of these models are not able to solve all kinds of fuzzy goal programming problems, and some of those that appear in the current literature suffer from a lack of precision in their formulations. This paper focuses on weighted additive models for fuzzy goal programming. It explains the oversights within some of them and proposes the necessary corrections. A new improved weighted additive model for solving fuzzy goal programming problems is introduced. The relationships between the new model and some of the existing models are discussed and proved. A numerical example is given to demonstrate the validity and strengths of the new model.
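
A minimal sketch of the weighted additive idea (with made-up goals, tolerances and weights, not the paper's formulation): maximize the weighted sum of goal-achievement degrees lambda_i under linear membership functions, which remains an ordinary linear program.

import numpy as np
from scipy.optimize import linprog

# Two illustrative fuzzy goals (assumptions, not from the paper):
#   G1: profit 3*x1 + 2*x2 should be "about 12 or more" -> mu1 = (3x1+2x2-8)/4
#   G2: usage  x1 + 2*x2   should be "about 4 or less"  -> mu2 = (8-x1-2x2)/4
# Decision vector z = [x1, x2, lam1, lam2]; maximize w1*lam1 + w2*lam2.
w = [0.6, 0.4]
c = np.array([0.0, 0.0, -w[0], -w[1]])      # linprog minimizes
A_ub = np.array([
    [-0.75, -0.50, 1.0, 0.0],               # lam1 <= (3x1 + 2x2 - 8)/4
    [ 0.25,  0.50, 0.0, 1.0],               # lam2 <= (8 - x1 - 2x2)/4
])
b_ub = np.array([-2.0, 2.0])
bounds = [(0, 3), (0, 3), (0, 1), (0, 1)]   # x boxed, lam in [0, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x1, x2, lam1, lam2 = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), achievement degrees = ({lam1:.2f}, {lam2:.2f})")

With these illustrative numbers the two goals conflict, and the weighted additive objective settles on the compromise lam ~ (1.0, 0.5), exactly the kind of trade-off such models are meant to encode.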

50 citations


Journal ArticleDOI
TL;DR: This work develops an interactive fuzzy linear programming (FLP) method for solving TPD problems with fuzzy goals, available supply and forecast demand, and provides a systematic framework that enables the DM to interactively modify the imprecise data and related parameters until a satisfactory solution is derived.
Abstract: In most real-world situations for transportation planning decision (TPD) problems, environmental coefficients and parameters are imprecise/fuzzy in nature, and the decision maker (DM) generally faces a multi-objective TPD problem in a fuzzy environment. This work develops an interactive fuzzy linear programming (FLP) method for solving TPD problems with fuzzy goals, available supply and forecast demand. The proposed method attempts simultaneously to minimize the total production and transportation costs and the total delivery time with reference to available supply, machine capacities and budget constraints at each source, as well as forecast demand and warehouse space constraints at each destination. In addition, the proposed method provides a systematic framework that enables the DM to interactively modify the imprecise data and related parameters until a satisfactory solution is derived. An industrial case is used to demonstrate the feasibility of applying the proposed method to real-world TPD problems. In particular, several significant characteristics of the proposed FLP method are presented in contrast to those of the main existing TPD methods.
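
The max-min step at the heart of such interactive FLP methods can be sketched in a few lines. The 2x2 instance below is illustrative, not the paper's industrial case; the membership bounds come from assumed ideal and anti-ideal objective values.

import numpy as np
from scipy.optimize import linprog

# Routes ordered (1,1), (1,2), (2,1), (2,2); all data are assumptions.
c = np.array([8.0, 12.0, 10.0, 7.0])   # unit transportation costs
t = np.array([9.0, 4.0, 3.0, 8.0])     # unit delivery times
s, d = [40, 50], [35, 45]              # supplies and demands

# Memberships built from individual optima (cost 595, time 285) and worst
# values (890, 675): mu_cost = (890 - Z1)/295, mu_time = (675 - Z2)/390.
obj = np.r_[np.zeros(4), -1.0]         # maximize the overall satisfaction lam
A_ub = np.array([
    np.r_[c, 295.0],                   # Z1 + 295*lam <= 890
    np.r_[t, 390.0],                   # Z2 + 390*lam <= 675
    [1, 1, 0, 0, 0],                   # supply of source 1
    [0, 0, 1, 1, 0],                   # supply of source 2
    [-1, 0, -1, 0, 0],                 # demand of destination 1
    [0, -1, 0, -1, 0],                 # demand of destination 2
])
b_ub = np.array([890.0, 675.0, s[0], s[1], -d[0], -d[1]])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 4 + [(0, 1)], method="highs")
x, lam = res.x[:4], res.x[4]
print("shipments:", np.round(x, 1), "| overall satisfaction:", round(lam, 3))

In an interactive loop, the DM would inspect lam and the shipment plan, then tighten or relax the fuzzy goals and re-solve.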

46 citations


Journal ArticleDOI
TL;DR: This work evaluates seven specially selected direct methods of estimating priority vectors from reciprocal pairwise comparison matrices under four effectiveness measures to suggest that the geometric mean method is in general the best method with the normalized column mean method as a close second best.
Abstract: Pairwise comparison is commonly used to estimate the priority values of finite alternatives with respect to a given criterion. We evaluate seven specially selected direct methods of estimating priority vectors from reciprocal pairwise comparison matrices under four effectiveness measures. A simulation experiment is designed starting with true priority vectors that represent difficult cases of "no obvious best alternative" and "two equal best alternatives". The simulation results suggest that the geometric mean method is in general the best method with the normalized column mean method as a close second best.
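
The two winning methods are easy to state concretely. A small sketch follows; the 4x4 matrix is illustrative and is built so that alternatives 1 and 4 are nearly equal best, echoing one of the difficult cases simulated in the paper.

import numpy as np

# Reciprocal pairwise comparison matrix (a_ij = preference of i over j).
A = np.array([
    [1.0, 3.0, 5.0, 1.0],
    [1/3, 1.0, 2.0, 1/4],
    [1/5, 1/2, 1.0, 1/6],
    [1.0, 4.0, 6.0, 1.0],
])
n = A.shape[0]

# Geometric mean method: w_i proportional to the geometric mean of row i.
gm = np.prod(A, axis=1) ** (1.0 / n)
w_gm = gm / gm.sum()

# Normalized column mean method: normalize each column, average across rows.
w_ncm = (A / A.sum(axis=0)).mean(axis=1)

print("geometric mean:        ", np.round(w_gm, 4))
print("normalized column mean:", np.round(w_ncm, 4))

For a perfectly consistent matrix both methods recover the true priority vector; they differ in how they absorb inconsistency.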

40 citations


Journal ArticleDOI
TL;DR: The distribution of papers published by Asian authors in Operations Research and Management Science journals from 1968 to 2006 is evaluated, and the impact of OR/MS research in Asia is compared with that of the United States and the World, and research trends are highlighted through an analysis of keywords.
Abstract: This paper evaluates the distribution of papers published by Asian authors in Operations Research and Management Science (OR/MS) journals from 1968 to 2006. The impact of OR/MS research in Asia is compared with that of the United States and the World, and research trends are highlighted through an analysis of keywords.

25 citations


Journal ArticleDOI
TL;DR: This paper considers the single machine scheduling problem with linear earliness and quadratic tardiness costs and no machine idle time, proposes a lower bounding procedure based on the relaxation of the jobs' completion times, and presents branch-and-bound algorithms that incorporate the proposed lower bound.
Abstract: In this paper, we consider the single machine scheduling problem with linear earliness and quadratic tardiness costs, and no machine idle time. We propose a lower bounding procedure based on the relaxation of the jobs' completion times. Optimal branch-and-bound algorithms are then presented. These algorithms incorporate the proposed lower bound, as well as an insertion-based dominance test. The branch-and-bound procedures are tested on a wide set of randomly generated problems. The computational results show that the branch-and-bound algorithms are capable of optimally solving, within reasonable computation times, instances with up to 20 jobs.
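
For intuition about the objective, a brute-force check on a tiny illustrative instance (data made up; the paper's branch-and-bound with lower bounds is what makes larger instances tractable):

from itertools import permutations

# Processing times p, due dates d, linear earliness costs h, and quadratic
# tardiness weights w for four hypothetical jobs.
p = [4, 2, 5, 3]
d = [6, 9, 12, 7]
h = [1.0, 2.0, 1.0, 1.5]
w = [2.0, 1.0, 1.5, 1.0]

def cost(seq):
    """Total linear-earliness + quadratic-tardiness cost, no idle time."""
    t, total = 0, 0.0
    for j in seq:
        t += p[j]                       # completion time C_j
        total += h[j] * max(0, d[j] - t) + w[j] * max(0, t - d[j]) ** 2
    return total

best = min(permutations(range(len(p))), key=cost)
print("optimal sequence:", best, "cost:", cost(best))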

19 citations


Journal ArticleDOI
TL;DR: This research analyzes a kitting system where multiple components are grouped to form a predefined kit prior to assembly, and several easily computable bounds on system throughput are identified.
Abstract: This research analyzes a kitting system where multiple components are grouped to form a predefined kit prior to assembly. The kitting system is modeled as a fork/join synchronization station and component supply is assumed to be from fabrication facilities operating under a kanban control policy. Exact analysis of these systems is computationally intensive even under Markovian assumptions. To evaluate the impact of input parameters on kitting system performance, several easily computable bounds on system throughput are first identified. Using these bounds, closed form approximate expressions for system throughput are derived. The throughput estimate is used to compute other performance measures of interest such as mean queue length and mean waiting time in the system. The accuracy of the approximations is validated using numerical experiments and some performance insights are given.

17 citations


Journal ArticleDOI
TL;DR: A genetic algorithm-based heuristic (GA) is presented and compared with a random search solution and a mutually consistent solution (MC) using a numerical example; the results show that the GA approach is efficient and that the values of the performance index are significantly improved relative to the MC.
Abstract: This paper addresses the multi-period two-echelon integrated competitive/uncompetitive facility location problem in a distribution system design that involves locating regional distribution centers (RDCs) and stores, and determining the best strategy for distributing the commodities from a central distribution center (CDC) to RDCs and from RDCs to stores. The goal is to determine the optimal numbers, locations and capacities of RDCs and stores so as to maximize the total profit of the distribution system. Unlike most past research, our study allows for a dynamic planning horizon, distribution of commodities, configuration of two-echelon facilities, availability of capital for investment, external market competition, customer choice behavior and storage limitations. This problem is formulated as a bi-level programming model and a mutually consistent programming model, respectively. Since such a distribution system design problem belongs to a class of NP-hard problems, a genetic algorithm-based heuristic (GA) is presented and compared with a random search solution and a mutually consistent solution (MC) using a numerical example. The computational results show that the GA approach is efficient and that the values of the performance index are significantly improved relative to the MC.

17 citations


Journal ArticleDOI
TL;DR: This paper investigates the impact of learning curve effect on setup cost for the continuous review inventory model involving controllable lead time with the mixture of backorder and partial lost sales.
Abstract: This paper investigates the impact of the learning curve effect on setup cost for the continuous review inventory model involving controllable lead time with a mixture of backorders and partial lost sales. A learning curve is a well-known tool which describes the relation between the performance of a task and the number of repetitions of that task. The objective of this study is to minimize the expected total annual cost by simultaneously optimizing the order quantity, safety factor and lead time under different setup learning rates. Two models are considered in the paper, one with a normal demand distribution and another with a general demand distribution whose mean and variance are both known and finite. Numerical examples are presented to illustrate the procedures of the proposed solution algorithms, along with the savings in the total costs of the models when the learning effect on setup is included.
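
The core ingredient is the learning curve applied to successive setups, A_n = A_1 * n^(-b). A stripped-down sketch (deterministic demand, no backorders or lead time decisions, made-up parameters) shows how learning pushes the best lot size below the classical EOQ:

import numpy as np

D, H = 1200.0, 5.0        # annual demand and planning horizon in years
h = 4.0                   # holding cost per unit per year
A1 = 100.0                # cost of the first setup
rate = 0.90               # 90% learning: doubling setups cuts cost to 90%
b = -np.log2(rate)        # learning exponent, A_n = A1 * n**(-b)

def total_cost(Q):
    n_setups = int(np.ceil(D * H / Q))
    setup = A1 * np.sum(np.arange(1, n_setups + 1) ** (-b))
    holding = h * (Q / 2.0) * H
    return setup + holding

Qs = np.arange(20, 601)
costs = [total_cost(Q) for Q in Qs]
Q_star = Qs[int(np.argmin(costs))]
print(f"best lot size with learning: {Q_star}, cost: {min(costs):.1f}")
print(f"classical EOQ (no learning): {np.sqrt(2 * A1 * D / h):.1f}")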

16 citations


Journal ArticleDOI
TL;DR: The case with a fixed number of forbidden intervals can be solved by a pseudo-polynomial time algorithm, while no polynomial time approximation algorithm with a fixed performance ratio exists for the case with two forbidden intervals.
Abstract: We consider a non-preemptive single machine scheduling problem with forbidden intervals. Associated with each job is a given processing time and a delivery time to its customer, when the processing of the job is complete. The objective is to minimize the time taken for all the jobs to be delivered to the customers. The problem is strongly NP-hard in general. In this study, we show that the case with a fixed number of forbidden intervals can be solved by a pseudo-polynomial time algorithm, while no polynomial time approximation algorithm with a fixed performance ratio exists for the case with two forbidden intervals. We also develop a polynomial time approximation scheme (PTAS) for the case with a single forbidden interval.
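
For small instances the structure can be illustrated directly: choose which jobs complete before the forbidden interval, and sequence each block in non-increasing delivery-time order (optimal within a fixed block). The sketch below enumerates subsets on made-up data; the paper's pseudo-polynomial algorithm and PTAS avoid this exponential enumeration.

from itertools import combinations

# Processing times p, delivery times q, and one forbidden interval [s, e]
# during which the machine is down (illustrative data).
p = [3, 2, 4, 2]
q = [5, 8, 2, 6]
s, e = 6, 9
jobs = range(len(p))

def block_obj(block, start):
    """Latest delivery time of `block` started at `start`, sequencing jobs
    in non-increasing delivery-time order (optimal within a fixed block)."""
    t, worst = start, 0
    for j in sorted(block, key=lambda j: -q[j]):
        t += p[j]
        worst = max(worst, t + q[j])
    return worst

best = None
for k in range(len(p) + 1):
    for before in combinations(jobs, k):
        if sum(p[j] for j in before) > s:        # block must end by time s
            continue
        after = [j for j in jobs if j not in before]
        val = max(block_obj(before, 0), block_obj(after, e))
        if best is None or val < best[0]:
            best = (val, before)

print("minimum delivery completion time:", best[0],
      "| jobs before interval:", best[1])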

15 citations


Journal ArticleDOI
TL;DR: The empirical results show that VaR and CVaR, especially their combinations with traditional risk measures, are very helpful for comprehensively describing return distribution properties such as skewness and leptokurtosis, and can thus better evaluate the overall performance of mutual funds.
Abstract: The data envelopment analysis (DEA) method is a mathematical programming approach to evaluating the relative performance of portfolios. Since the risk inputs of existing DEA performance evaluation indices cannot reflect the pervasive fat tails and asymmetry in the return distributions of mutual funds, we introduce the risk measures VaR and CVaR as inputs of the relevant DEA indices to measure the relative performance of portfolios more objectively. To fairly evaluate the performance variation of the same fund during different time periods, we treat the fund in each period as a distinct decision making unit (DMU). Unlike available DEA applications, which mainly investigate American mutual fund performance at the whole-market or industry level, we analyze in detail the effect of different input/output indicator combinations on the performance of individual funds. Our empirical results show that VaR and CVaR, especially in combination with traditional risk measures, are very helpful for comprehensively describing return distribution properties such as skewness and leptokurtosis, and can thus better evaluate the overall performance of mutual funds.
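
A compact sketch of the two ingredients, historical VaR/CVaR estimation and an input-oriented CCR DEA program, on simulated returns for five hypothetical funds (all data and indicator choices below are assumptions, not the paper's empirical setup):

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def var_cvar(returns, alpha=0.95):
    """Historical VaR / CVaR of the loss distribution at level alpha."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    return var, losses[losses >= var].mean()

# Simulated monthly returns for 5 hypothetical funds.
returns = rng.normal(loc=[0.012, 0.010, 0.015, 0.008, 0.011],
                     scale=[0.030, 0.020, 0.050, 0.012, 0.025],
                     size=(240, 5))

# Inputs: (CVaR, standard deviation); output: mean return.
X = np.array([[var_cvar(returns[:, j])[1], returns[:, j].std()]
              for j in range(5)]).T
Y = returns.mean(axis=0, keepdims=True)

def ccr_efficiency(o):
    """Input-oriented CCR: min theta s.t. X @ lam <= theta * X[:, o],
    Y @ lam >= Y[:, o], lam >= 0 (variables: theta, lam)."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([np.c_[-X[:, [o]], X],
                      np.c_[np.zeros((Y.shape[0], 1)), -Y]])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(5):
    print(f"fund {o}: CCR efficiency = {ccr_efficiency(o):.3f}")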

Journal ArticleDOI
TL;DR: A new heuristic is provided which achieves the best possible worst-case performance ratio 3/2 for the single-customer case; for the extended version, in which the transportation time of a delivery batch is defined as the maximum transportation time of the jobs contained in it, a heuristic with tight worst-case performance ratio 2 is given.
Abstract: In the single machine scheduling problem with job delivery to minimize makespan, jobs are processed on a single machine and delivered by a capacitated vehicle to their respective customers. We first consider the special case with a single customer, that is, all jobs have the same transportation time. Chang and Lee (2004) proved that this case is strongly NP-hard. They also provided a heuristic with the worst-case performance ratio 5/3, and pointed out that no heuristic can have a worst-case performance ratio less than 3/2 unless P = NP. In this paper, we provide a new heuristic which has the best possible worst-case performance ratio 3/2. We also consider an extended version in which the jobs have non-identical transportation times and the transportation time of a delivery batch is defined as the maximum transportation time of the jobs contained in it. We provide a heuristic with the worst-case performance ratio 2 for the extended version, and show that this bound is tight.

Journal ArticleDOI
TL;DR: This investigation elucidates the feasibility of monitoring a process for which observational data are largely autocorrelated and extends the EWMA control chart to monitor a process in which the observations can be regarded as a first-order autoregressive process with a random error.
Abstract: This investigation elucidates the feasibility of monitoring a process whose observations are heavily autocorrelated. Special causes typically affect not only the process mean but also the process variance. The EWMA control chart has recently been developed and adopted to detect small shifts in the process mean and/or variance. This work extends the EWMA control chart to the generally weighted moving average (GWMA) control chart for monitoring a process in which the observations can be regarded as a first-order autoregressive process with a random error. EWMA and GWMA control charts of residuals, used to monitor process variability and to monitor the process mean and variance simultaneously, are considered, and their average run lengths (ARLs) are evaluated in each case.
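
A sketch of the residual-based GWMA statistic on a simulated AR(1) series; the design parameters q and alpha are illustrative, and alpha = 1 recovers the EWMA with smoothing constant 1 - q.

import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) observations x_t = phi*x_{t-1} + eps_t.
phi, n = 0.6, 300
x = np.zeros(n)
eps = rng.normal(0.0, 1.0, n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Residuals of the AR(1) fit (phi known here; in practice it is estimated).
e = x[1:] - phi * x[:-1]

q, alpha = 0.9, 0.8      # design parameters; alpha = 1 gives the EWMA

def gwma(series, mu0=0.0):
    """G_t = sum_{j=1..t} (q**((j-1)**alpha) - q**(j**alpha)) * X_{t-j+1}
           + q**(t**alpha) * mu0   (older data receive smaller weights)."""
    out = np.empty(len(series))
    for t in range(1, len(series) + 1):
        j = np.arange(1, t + 1)
        wts = q ** ((j - 1.0) ** alpha) - q ** (j ** alpha)
        out[t - 1] = wts @ series[t - 1 :: -1] + q ** (t ** alpha) * mu0
    return out

g = gwma(e)
j = np.arange(1, len(e) + 1)
wts = q ** ((j - 1.0) ** alpha) - q ** (j ** alpha)
lim = 3.0 * e.std(ddof=1) * np.sqrt((wts ** 2).sum())  # steady-state 3-sigma
print(f"last GWMA value {g[-1]:+.3f}; approximate control limits +/- {lim:.3f}")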

Journal ArticleDOI
TL;DR: Two new metaheuristic approaches for the leaf-constrained minimum spanning tree problem are proposed, one is an ant-colony optimization (ACO) algorithm, whereas the other is a tabu search based algorithm.
Abstract: Given an undirected, connected, weighted graph, the leaf-constrained minimum spanning tree (LCMST) problem seeks a spanning tree of smallest weight among all spanning trees of the graph that contain at least l leaves. In this paper we propose two new metaheuristic approaches for the LCMST problem. One is an ant-colony optimization (ACO) algorithm, whereas the other is a tabu search based algorithm. Like a previously proposed genetic algorithm, these metaheuristic approaches use the subset coding that represents a leaf-constrained spanning tree by the set of its interior vertices. Our new approaches perform well in comparison with the two best heuristics reported in the literature for the problem — the subset-coded genetic algorithm and a greedy heuristic.
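
The subset coding both metaheuristics share is easy to make concrete: a candidate is a set of interior vertices, decoded by spanning the interior with an MST and attaching every other vertex as a leaf. The sketch below exhaustively scores all subsets of a small random graph (illustrative only; the ACO and tabu search are what make large instances tractable):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Random symmetric weight matrix for a small complete graph.
n = 7
W = np.round(rng.uniform(1.0, 10.0, (n, n)), 1)
W = np.triu(W, 1) + np.triu(W, 1).T

def prim_cost(nodes):
    """Weight of a minimum spanning tree over `nodes` (Prim's algorithm)."""
    nodes = list(nodes)
    in_tree, cost = {nodes[0]}, 0.0
    while len(in_tree) < len(nodes):
        u, v = min(((u, v) for u in in_tree
                    for v in nodes if v not in in_tree),
                   key=lambda edge: W[edge])
        in_tree.add(v)
        cost += W[u, v]
    return cost

def decode(interior):
    """Subset coding: span the interior vertices by an MST, then attach
    every remaining vertex as a leaf to its cheapest interior neighbor."""
    leaves = [v for v in range(n) if v not in interior]
    return prim_cost(interior) + sum(min(W[v, u] for u in interior)
                                     for v in leaves)

min_leaves = 4
best = min((frozenset(cmb)
            for k in range(1, n - min_leaves + 1)
            for cmb in combinations(range(n), k)), key=decode)
print("interior vertices:", sorted(best),
      "| tree weight:", round(decode(best), 1))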

Journal ArticleDOI
TL;DR: A single item, single cycle economic production quantity model for perishable products is proposed where the demand is two-component and stock dependent and the profit function is formulated under the assumption that the time period of the festival seasons is fixed.
Abstract: A single item, single cycle economic production quantity model for perishable products is proposed, where the demand is two-component and stock dependent. The production inventory scenario of products such as cakes, bread, fast foods, fish, garments and cosmetics in the festival season is considered. The profit function is formulated under the assumption that the time period of the festival season is fixed and the display capacity of the produced item is limited. In the formulation process, to introduce more flexibility, a goal programming technique is incorporated to achieve the producer's desired profit while stocking as much inventory as possible below the display capacity level. A numerical example is presented to illustrate the proposed model. A sensitivity analysis of the model is also carried out.

Journal ArticleDOI
TL;DR: This paper proposes a method for finding the reference set of a decision making unit (DMU) without chasing down all alternative optimal solutions of the envelopment form, which is a strongly degenerate problem.
Abstract: In this paper we propose a method for finding the reference set of a decision making unit (DMU) without chasing down all alternative optimal solutions of the envelopment form, which is a strongly degenerate problem. The reference set is useful as a benchmark for an inefficient DMU, for identifying the status of returns to scale, for ranking DMUs, and so on. Finally, numerical examples are given to illustrate our proposed approach.

Journal ArticleDOI
TL;DR: A convergence theorem shows that the dual algorithm based on any nonlinear Lagrangian in the class is locally convergent when the penalty parameter is less than a threshold, under a set of suitable conditions on the problem functions; an error bound for the solution, depending on the penalty parameter, is also established.
Abstract: This paper establishes a theoretical framework for a class of nonlinear Lagrangians for solving nonlinear programming problems with inequality constraints. A set of conditions is proposed to guarantee the convergence of nonlinear Lagrangian algorithms, to analyze the condition numbers of nonlinear Lagrangian Hessians, and to develop the dual approaches. These conditions are satisfied by well-known nonlinear Lagrangians appearing in the literature. The convergence theorem shows that, under a set of suitable conditions on the problem functions, the dual algorithm based on any nonlinear Lagrangian in the class is locally convergent when the penalty parameter is less than a threshold, and an error bound for the solution, depending on the penalty parameter, is also established. The paper also develops the dual problems based on the proposed nonlinear Lagrangians, and the related duality theorem and saddle point theorem are demonstrated. Furthermore, it is shown that the condition numbers of the Lagrangian Hessians at optimal solutions are proportional to the controlling penalty parameters. We report some numerical results obtained by using nonlinear Lagrangians.
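
One classic member of such a class is the exponential penalty Lagrangian with multiplicative multiplier updates; a toy sketch on an assumed two-variable problem (not an example from the paper):

import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = (x1-2)^2 + (x2-1)^2
# subject to g(x) = x1 + x2 - 2 <= 0  (solution (1.5, 0.5), multiplier 1).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: x[0] + x[1] - 2.0
k = 4.0                                   # penalty parameter, kept moderate

def nl_lagrangian(x, u):
    """Exponential nonlinear Lagrangian f + (u/k)*(exp(k*g) - 1)."""
    t = min(k * g(x), 50.0)               # clip to avoid overflow in line search
    return f(x) + (u / k) * (np.exp(t) - 1.0)

u, x = 1.0, np.zeros(2)
for it in range(20):
    x = minimize(lambda z: nl_lagrangian(z, u), x, method="BFGS").x
    u *= np.exp(k * g(x))                 # dual (multiplier) update
print("x* ~", np.round(x, 4), "| multiplier ~", round(u, 4))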

Journal ArticleDOI
TL;DR: It is shown that each solution that admits a potential can be obtained by applying the weighted associated consistent value proposed in this paper to an appropriately modified game.
Abstract: This paper is devoted to the study of solutions for multi-choice transferable-utility (TU) games which admit a potential, such as the potential associated with a solution in the context of multi-choice TU games. Several axiomatizations of the family of all solutions that admit a potential are offered and, as a main result, it is shown that each of these solutions can be obtained by applying the weighted associated consistent value proposed in this paper to an appropriately modified game. We also characterize the weighted associated consistent value by means of the weighted balanced contributions and the associated consistency.

Journal ArticleDOI
TL;DR: The method is extended to solve all four standard technologies (VRS, CRS, NDRS and NIRS) by an enumeration algorithm, without using standard LP or MILP solution methods.
Abstract: The production possibility set (PPS) based on the free disposal hull (FDH) assumption describes the minimum PPS for evaluating the efficiency of DMUs and presents one reference for each unit. Tulkens (Journal of Productivity Analysis, 4(1), 183–210) proposed a mathematical program and a procedure for solving the FDH model that can be used only for the VRS technology. In this paper, we extend the method to solve all four standard technologies (VRS, CRS, NDRS and NIRS) by an enumeration algorithm, without using standard LP or MILP solution methods.

Journal ArticleDOI
TL;DR: Two-moment approximation formulas are presented for performance modelling of multi-server systems with 2, 3, …, 10 servers, and applications to optimizing manufacturing and service systems using a marginal allocation algorithm are briefly illustrated.
Abstract: Multi-server, finite buffer, performance models of queueing systems are very useful tools for manufacturing, telecommunication, transportation and facility modelling applications. Exact computation of performance measures for general service multi-server queueing systems remains an intractable problem. Approximations of these performance measures are important to quickly and accurately reveal the performance of a system. This is desirable for both performance evaluation and optimization of these systems. Two-moment approximation formulas are presented for performance modelling of multi-server systems with 2, 3, …, 10 servers. Extensive computational results are provided to evaluate the approximations against simulation, known tabular results, and other approximation formulas. Applications of the model to optimizing manufacturing and service systems using a marginal allocation algorithm are briefly illustrated. Extensions of the two-moment methodology to larger multi-server systems with c = 25, 50 and 100 servers round out the paper.
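
The flavor of a two-moment approximation can be shown with the classical scaling of the Erlang C (M/M/c) delay by (1 + cs^2)/2, where cs^2 is the squared coefficient of variation of service times. The paper's formulas are more refined; the numbers below are illustrative.

import math

def wq_mmc(lam, mu, c):
    """Mean queueing delay for M/M/c (Erlang C formula)."""
    a, rho = lam / mu, lam / (c * mu)
    tail = a ** c / (math.factorial(c) * (1.0 - rho))
    p_wait = tail / (sum(a ** k / math.factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)

def wq_mgc(lam, mu, c, cs2):
    """Two-moment approximation: scale the M/M/c delay by (1 + cs2)/2."""
    return 0.5 * (1.0 + cs2) * wq_mmc(lam, mu, c)

# Example: arrival rate 3.2/hr, mean service time 1 hr, service SCV 0.5.
for c in range(4, 8):
    print(f"{c} servers: Wq ~ {wq_mgc(3.2, 1.0, c, 0.5):.4f} hr")

Pairing such a formula with a marginal allocation algorithm means repeatedly adding the server that yields the largest performance gain per unit cost.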

Journal ArticleDOI
TL;DR: The evidence shows that the proposed GANNRS is more efficient in computation, and the results from the objectives are appealing.
Abstract: To optimize the design of reliability systems, an analyst is frequently faced with the demand of achieving several targets (i.e., maximization of system reliability and minimization of cost, volume, and weight), some of which may be in conflict with each other. This paper presents a novel hybrid approach, combining a multi-objective genetic algorithm and a neural network, for multi-objective optimization of a reliability system, namely GANNRS (Genetic Algorithm and Neural Network for Reliability System optimization). The multi-objective genetic algorithm's evolutionary strategy is based on a modified neighborhood design and finds the Pareto optimal solutions so as to provide a variety of compromise solutions to the decision makers. The purpose of the neural network is to generate a good initial population in order to speed up the search by the genetic algorithm. To demonstrate the feasibility of the proposed approach, four multi-objective optimization problems of reliability systems are used, and the outcomes are compared with those from other methods. The evidence shows that the proposed GANNRS is more efficient in computation, and the results for the objectives are appealing.

Journal ArticleDOI
TL;DR: Fuzzy queuing characteristics are calculated via different membership functions and the results are compared through simulations; among the models, the Generalized Beta Distribution membership function is found to be the one that minimizes the queuing characteristics.
Abstract: Queuing models need well-defined knowledge of arrivals and service times. However, in real applications, because of measurement errors or loss of information, such deterministic knowledge is hard to obtain, and non-deterministic knowledge complicates the analysis of the queuing model. Additionally, when customers are asked about their impressions of waiting times or service times, the answers are mostly linguistic expressions such as "I waited too much" or "the service was fast". Such linguistic statements and ill-defined data introduce imprecision into the model. In this study, arrivals and service times are defined as fuzzy numbers in order to represent this imprecision. Fuzzy multi-channel queuing systems and membership functions are introduced for defining the arrivals and service times. In addition, a new membership function based on a probability function is studied. Fuzzy queuing characteristics are calculated via different membership functions and the results are compared through simulations. Among the models, it is found that the Generalized Beta Distribution membership function is the one that minimizes the queuing characteristics.
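
The mechanics of fuzzified queue characteristics are easy to sketch for a single channel: with triangular fuzzy arrival and service rates, the alpha-cuts of Lq = rho^2/(1 - rho) follow from interval endpoints by monotonicity. The rates are illustrative; the paper treats multi-channel systems and richer membership functions.

import numpy as np

def tri_cut(tri, a):
    """Alpha-cut interval of a triangular fuzzy number (l, m, r)."""
    l, m, r = tri
    return l + a * (m - l), r - a * (r - m)

def lq(lam, mu):
    """Mean queue length of an M/M/1 queue, Lq = rho^2 / (1 - rho)."""
    rho = lam / mu
    return rho ** 2 / (1.0 - rho)

lam_tri = (3.0, 4.0, 5.0)    # fuzzy arrival rate (illustrative)
mu_tri = (6.0, 7.0, 8.0)     # fuzzy service rate (illustrative)

# Lq is increasing in lambda and decreasing in mu, so each alpha-cut of the
# fuzzy queue length is obtained from the interval endpoints directly.
for a in (0.0, 0.5, 1.0):
    (lL, lU), (mL, mU) = tri_cut(lam_tri, a), tri_cut(mu_tri, a)
    print(f"alpha = {a:.1f}: Lq in [{lq(lL, mU):.3f}, {lq(lU, mL):.3f}]")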

Journal ArticleDOI
TL;DR: This study incorporates linearly and exponentially decreasing unit production costs during the mature stage of a product life cycle and presents a mathematical inventory model for production policy.
Abstract: It is common for the prices of raw materials, parts or products to decrease significantly after they come onto the market; high technology products such as PCs, CPUs, DRAM, and mobile phones are good examples. Consequently, the traditional economic production quantity (EPQ) model, which assumes a constant unit production cost, is no longer suitable for today's time-based competition. This study incorporates linearly and exponentially decreasing unit production costs during the mature stage of a product life cycle and presents a mathematical inventory model for the production policy. A recursive algorithm is developed to obtain the optimal production schedule, and a one-dimensional search method is applied to find the optimal number of production cycles. In addition, numerical examples are discussed to illustrate the proposed model and the significance of considering, rather than ignoring, a continuous reduction in unit production costs in the production policy.

Journal ArticleDOI
TL;DR: A bivariate distribution function is constructed to simultaneously incorporate the willing-to-pay price and the transaction cost into the classical economic order quantity (EOQ) model, so that the demand function faced by the supplier can be expressed in a concrete form.
Abstract: According to marketing principles, a decision maker may control the demand rate through the selling price and the unit facility cost of promoting transactions. In practice, the upper bound of the willing-to-pay price and the transaction cost probably depend upon the subjective judgment of the individual consumer purchasing the merchandise. This study therefore attempts to construct a bivariate distribution function to simultaneously incorporate the willing-to-pay price and the transaction cost into the classical economic order quantity (EOQ) model. Through the manipulation of the constructed bivariate distribution function, the demand function faced by the supplier can be expressed in a concrete form. The proposed mathematical model mainly concerns how to determine the initial inventory level for each business cycle, so that the profit per unit time is maximized by using the selling price and the unit transaction cost to control the selling rate. Furthermore, the sensitivity analysis of the optimal solution is performed and ...

Journal ArticleDOI
TL;DR: The aim of this paper is to develop an optimal periodic PM model that minimizes the expected cost rate per unit time while considering a reliability limit, for repairable systems whose degradation rate is reduced after each PM.
Abstract: From the literature, it is known that preventive maintenance (PM) can restore a deteriorating system or piece of equipment to a younger level. Researchers usually develop optimal PM policies based on the assumption that PM can reduce the system's age or failure rate. However, PM actions such as cleaning, adjustment, alignment, and lubrication may not always reduce the system's age or failure rate; instead, they may only reduce the degradation rate of the system to a certain level. In addition, most existing optimal PM policies are developed by minimizing the expected cost rate only. Yet, as demonstrated in this paper, the system will have very low reliability at the time of preventive replacement if a reliability limit is not considered. Hence, this paper develops an optimal periodic PM model that minimizes the expected cost rate per unit time while considering a reliability limit, for repairable systems whose degradation rate is reduced after each PM. The improvement factor method is used to measure the reduction effect on the degradation rate. An algorithm for searching for the optimal solutions of this PM model is developed. Examples are also presented, with discussions of parameter sensitivity and special cases.

Journal ArticleDOI
TL;DR: The results demonstrate that incorporating local search operators with a probabilistic scheme and delta method of fitness evaluation into the memetic algorithm significantly improves the search capabilities of the algorithm.
Abstract: The lecture timetabling problem is known to be a highly constrained combinatorial optimization problem. There have been many attempts to address this problem using integer programming, graph coloring and several heuristic search methods. However, since each university has its own timetable setting requirements, it is difficult to develop a general solution method, and the work is generally done manually. This paper attempts to solve the lecture timetabling problem of the University of Asmara using a customized memetic algorithm that we have called ALTUMA, a hybrid of genetic algorithms with hill-climbing operators. The performance of ALTUMA was evaluated using data obtained from the University. Empirical results show that ALTUMA is capable of producing good results in a reasonable amount of time. Moreover, the results demonstrate that incorporating local search operators with a probabilistic scheme and the delta method of fitness evaluation into the memetic algorithm significantly improves the search capabilities of the algorithm.

Journal ArticleDOI
TL;DR: Using a dynamic programming formulation, an analysis is presented of both the first and second innings of a one-day cricket match assuming variation in type of ball bowled and subsequent selection of a strategy by the batsman.
Abstract: Using a dynamic programming formulation, an analysis is presented of both the first and second innings of a one-day cricket match assuming variation in type of ball bowled and subsequent selection of a strategy by the batsman. We assume that the team batting first uses the strategy to maximize the expected score, and the team batting second uses the strategy to maximize the probability of outscoring the first team and thus of maximizing the probability of a win. The dynamic programming formulation allows a calculation, at any stage of the innings, of the optimal scoring strategy depending on the type of ball bowled, along with an estimate of the maximum of the expected number of runs scored in the remainder of the first innings, and the maximum probability of a win in the second innings. Modifications are then introduced to examine the effect of tailender batsmen and a "fifth bowler". Finally a simulation is done to estimate the variance in total score by following the optimal strategy used in the first innings.
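
The first-innings recursion is compact enough to sketch: with balls and wickets as the state and a small set of batting strategies with assumed outcome distributions (made-up probabilities, not the paper's estimates), the optimal expected score follows by backward induction.

from functools import lru_cache

# Each strategy gives probabilities over runs 0, 1, 2, 4, 6 and dismissal.
STRATS = {
    "defend": {"runs": [0.70, 0.20, 0.05, 0.03, 0.00], "out": 0.02},
    "normal": {"runs": [0.45, 0.30, 0.10, 0.08, 0.02], "out": 0.05},
    "attack": {"runs": [0.25, 0.25, 0.10, 0.20, 0.08], "out": 0.12},
}
RUN_VALUES = [0, 1, 2, 4, 6]

@lru_cache(maxsize=None)
def V(balls, wickets):
    """Maximum expected further score with `balls` and `wickets` left."""
    if balls == 0 or wickets == 0:
        return 0.0
    best = 0.0
    for s in STRATS.values():
        ev = s["out"] * V(balls - 1, wickets - 1)
        for prob, runs in zip(s["runs"], RUN_VALUES):
            ev += prob * (runs + V(balls - 1, wickets))
        best = max(best, ev)
    return best

print("expected score, 60 balls / 10 wickets:", round(V(60, 10), 1))

The second-innings version replaces expected runs with the probability of overtaking the first-innings total, but the backward induction is the same.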

Journal ArticleDOI
TL;DR: The necessary and sufficient condition for its corresponding fluid model to be stable is obtained and the positive Harris recurrence of the 2-station-5-class re-entrant line is established via the fluid model.
Abstract: This paper considers a 2-station-5-class re-entrant line with infinite supply of work. We obtain the necessary and sufficient condition for its corresponding fluid model to be stable. As an application of the result, the positive Harris recurrence of the 2-station-5-class re-entrant line is established via the fluid model.

Journal ArticleDOI
TL;DR: The optimal repair-cost limits are derived analytically and the statistical non-parametric procedures to estimate them from the complete sample of repair cost are developed, and the discrete total time on test (DTTT) concept is introduced and applied to propose the resulting estimators.
Abstract: This paper addresses statistical estimation problems for the optimal repair-cost limits minimizing the long-run average costs per unit time in a discrete setting. Two discrete repair-cost limit replacement models, with and without imperfect repair, are considered. We derive the optimal repair-cost limits analytically and develop statistical non-parametric procedures to estimate them from a complete sample of repair costs. The discrete total time on test (DTTT) concept is then introduced and applied to propose the resulting estimators. Numerical experiments through Monte Carlo simulation are provided to show their asymptotic convergence properties as the number of repair-cost data increases. A comprehensive bibliography on this research topic is also provided.

Journal ArticleDOI
TL;DR: An appropriate model for a customer to determine its optimal special order quantity when the supplier offers a special extended permissible delay for one time only during a specified period is established.
Abstract: In today's business environment, a supplier usually offers customers a permissible delay in settling the outstanding account balance for the goods supplied. However, a supplier may on occasion allow this permissible delay in payments to be longer than usual during a given specified period. In this paper, we establish an appropriate model for a customer to determine its optimal special order quantity when the supplier offers a special extended permissible delay for one time only during a specified period. We then establish two theorems for the customer to find the optimal special order quantity. Finally, several numerical examples are given to illustrate the theoretical results.