
Showing papers in "European Journal of Operational Research in 2001"


Journal ArticleDOI
TL;DR: This paper addresses the "super-efficiency" issue of Data Envelopment Analysis by using the slacks-based measure (SBM) of efficiency, which the author proposed in his previous paper [European Journal of Operational Research 130 (2001) 498].
Abstract: In most models of Data Envelopment Analysis (DEA), the best performers have the full efficient status denoted by unity (or 100%), and, from experience, we know that usually several Decision Making Units (DMUs) have this “efficient status”. To discriminate between these efficient DMUs is an interesting subject. This paper addresses this “super-efficiency” issue by using the slacks-based measure (SBM) of efficiency, which the author proposed in his previous paper [European Journal of Operational Research 130 (2001) 498]. The method differs from the traditional one based on the radial measure, e.g. the Andersen and Petersen model, in that the former deals directly with slacks in inputs/outputs, while the latter does not take account of the existence of slacks. We will demonstrate the rationality of our approach by comparing it with the radial measure of super-efficiency. The proposed method will be particularly useful when the number of DMUs is small compared with the number of criteria employed for evaluation.
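As a point of reference for the comparison the abstract describes, the sketch below sets up the input-oriented radial super-efficiency LP of the Andersen–Petersen type (the evaluated DMU is removed from the reference set) with scipy; the two-input/one-output data and the function name ap_super_efficiency are invented for illustration, and the paper's own slacks-based measure is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: rows = DMUs, columns = inputs / outputs (illustrative only).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                            # outputs

def ap_super_efficiency(o, X, Y):
    """Input-oriented radial super-efficiency of DMU o (Andersen-Petersen style):
    minimise theta s.t. sum_{j != o} lam_j x_j <= theta x_o,
                        sum_{j != o} lam_j y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != o]
    c = np.zeros(1 + len(others)); c[0] = 1.0      # variables: [theta, lam_j (j != o)]
    A_ub, b_ub = [], []
    for i in range(m):                             # input constraints
        row = np.zeros(1 + len(others))
        row[0] = -X[o, i]
        row[1:] = [X[j, i] for j in others]
        A_ub.append(row); b_ub.append(0.0)
    for r in range(s):                             # output constraints
        row = np.zeros(1 + len(others))
        row[1:] = [-Y[j, r] for j in others]
        A_ub.append(row); b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * len(others)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0] if res.success else np.nan

for o in range(X.shape[0]):
    print(f"DMU {o}: radial super-efficiency = {ap_super_efficiency(o, X, Y):.3f}")
```

Efficient DMUs can obtain scores above 1 under this radial measure; the paper's argument is that such scores ignore input/output slacks, which the SBM-based alternative accounts for directly.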

2,133 citations


Journal ArticleDOI
Abstract: Systematic change of neighborhood within a possibly randomized local search algorithm yields a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS). We present a basic scheme for this purpose, which can easily be implemented using any local search algorithm as a subroutine. Its effectiveness is illustrated by solving several classical combinatorial or global optimization problems. Moreover, several extensions are proposed for solving large problem instances: using VNS within the successive approximation method yields a two-level VNS, called variable neighborhood decomposition search (VNDS); modifying the basic scheme to easily explore valleys far from the incumbent solution yields an efficient skewed VNS (SVNS) heuristic. Finally, we show how to stabilize column generation algorithms with the help of VNS and discuss various ways to use VNS in graph theory, i.e., to suggest, disprove or give hints on how to prove conjectures, an area where metaheuristics do not appear to have been applied before.
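As a rough illustration of the basic scheme mentioned above, here is a minimal generic VNS loop in Python; the toy objective, the neighbourhood radii and the simple hill-climbing subroutine are my own illustrative choices, not the authors' implementations.

```python
import random
import math

def basic_vns(x0, f, neighborhoods, local_search, max_iters=1000, seed=0):
    """Basic VNS loop: shake in the k-th neighborhood, run local search,
    move and reset k on improvement, otherwise switch to the next neighborhood."""
    rng = random.Random(seed)
    x, k = x0, 0
    for _ in range(max_iters):
        x_shaken = neighborhoods[k](x, rng)      # shaking
        x_local = local_search(x_shaken, f)      # local descent
        if f(x_local) < f(x):                    # move or not
            x, k = x_local, 0
        else:
            k = (k + 1) % len(neighborhoods)
    return x

# Toy example: minimise a 1-D multimodal function.
f = lambda x: math.sin(3 * x) + 0.1 * x * x

def make_shake(radius):
    return lambda x, rng: x + rng.uniform(-radius, radius)

def hill_climb(x, f, step=0.01, iters=200):
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

best = basic_vns(0.0, f, [make_shake(r) for r in (0.5, 1.0, 2.0)], hill_climb)
print(best, f(best))
```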

1,732 citations


Journal ArticleDOI
TL;DR: The original rough set approach proved to be very useful in dealing with inconsistency problems following from information granulation, but fails when preference orders of attribute domains (criteria) are to be taken into account, because it cannot handle inconsistencies that follow from violation of the dominance principle.
Abstract: The original rough set approach proved to be very useful in dealing with inconsistency problems following from information granulation. It operates on a data table composed of a set U of objects (actions) described by a set Q of attributes. Its basic notions are: indiscernibility relation on U, lower and upper approximation of either a subset or a partition of U, dependence and reduction of attributes from Q, and decision rules derived from lower approximations and boundaries of subsets identified with decision classes. The original rough set idea fails, however, when preference orders of attribute domains (criteria) are to be taken into account. Precisely, it cannot handle inconsistencies following from violation of the dominance principle. This kind of inconsistency is characteristic of preferential information used in multicriteria decision analysis (MCDA) problems, like sorting, choice or ranking. In order to deal with this kind of inconsistency, a number of methodological changes to the original rough set theory are necessary. The main change is the substitution of the indiscernibility relation by a dominance relation, which permits approximation of ordered sets in multicriteria sorting. To approximate preference relations in multicriteria choice and ranking problems, another change is necessary: substitution of the data table by a pairwise comparison table, where each row corresponds to a pair of objects described by binary relations on particular criteria. In all these MCDA problems, the new rough set approach ends with a set of decision rules playing the role of a comprehensive preference model. It is more general than the classical functional or relational model and it is more understandable for the users because of its natural syntax. In order to work out a recommendation in one of the MCDA problems, we propose exploitation procedures for the set of decision rules. Finally, some other recently obtained results are given: rough approximations by means of similarity relations, rough set handling of missing data, comparison of the rough set model with Sugeno and Choquet integrals, and results on equivalence of a decision rule preference model and a conjoint measurement model which is neither additive nor transitive.
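To make the dominance principle concrete, the toy sketch below flags pairs of objects where one object is at least as good as another on every (gain-type) criterion yet is assigned to a strictly worse decision class; the data and function names are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a is at least as good as b on every (gain-type) criterion."""
    return all(ai >= bi for ai, bi in zip(a, b))

def dominance_inconsistencies(objects):
    """Pairs violating the dominance principle: x dominates y on all criteria
    but x is assigned to a strictly worse decision class."""
    bad = []
    for xi, (x_crit, x_cls) in enumerate(objects):
        for yi, (y_crit, y_cls) in enumerate(objects):
            if xi != yi and dominates(x_crit, y_crit) and x_cls < y_cls:
                bad.append((xi, yi))
    return bad

# Toy data: ((criteria scores ...), decision class), higher is better everywhere.
students = [((8, 7), 2), ((6, 5), 1), ((9, 8), 1), ((5, 4), 0)]
print(dominance_inconsistencies(students))  # student 2 dominates student 0 but gets a worse class
```

Pairs like the one reported here cannot be explained by indiscernibility alone, which is why the approach replaces the indiscernibility relation with a dominance relation.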

1,436 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to highlight some of the pitfalls that have been identified in application papers under each of these headings and to suggest protocols to avoid the pitfalls and guide the application of the methodology.
Abstract: The practical application of data envelopment analysis (DEA) presents a range of procedural issues to be examined and resolved including those relating to the homogeneity of the units under assessment, the input/output set selected, the measurement of those selected variables and the weights attributed to them. Each of these issues can present difficulties in practice. The purpose of this paper is to highlight some of the pitfalls that have been identified in application papers under each of these headings and to suggest protocols to avoid the pitfalls and guide the application of the methodology.

1,217 citations


Journal ArticleDOI
TL;DR: The motivations, extensions and generalizations of various models in each sub-class have been discussed in brief to bring out pertinent information regarding model developments in the last decade.
Abstract: This paper presents a review of the advances of deteriorating inventory literature since the early 1990s. The models available in the relevant literature have been suitably classified by the shelf-life characteristic of the inventoried goods. They have further been sub-classified on the basis of demand variations and various other conditions or constraints. The motivations, extensions and generalizations of various models in each sub-class have been discussed in brief to bring out pertinent information regarding model developments in the last decade.

1,169 citations


Journal ArticleDOI
TL;DR: Various approaches for treating undesirable outputs in the framework of Data Envelopment Analysis (DEA) are discussed and the resulting efficient frontiers are compared.
Abstract: Efficiency measurement is usually based on the assumption that inputs have to be minimized and outputs have to be maximized. In a growing number of applications, however, undesirable outputs, which have to be minimized, are also incorporated into the production model. In this paper various approaches for treating such outputs in the framework of Data Envelopment Analysis (DEA) are discussed and the resulting efficient frontiers are compared. New radial measures are introduced which assume that any change of the output level will involve both undesirable and desirable outputs.

656 citations


Journal ArticleDOI
TL;DR: A generalised and extended decision matrix is constructed, and rule and utility based techniques are developed for transforming various types of information within the matrix so that attributes can be aggregated via ER; the transformations are shown to be equivalent with regard to underlying utility and rational in terms of preserving the features of the original assessments.
Abstract: In this paper generic decision models and both rule and utility based techniques for transforming assessment information are developed to enhance an evidential reasoning (ER) approach for dealing with multiple attribute decision analysis (MADA) problems of both a quantitative and qualitative nature under uncertainties. In the existing ER approach, a modelling framework is established for representing subjective assessments under uncertainty, in which a set of evaluation grades for a qualitative attribute is defined. The attribute may then be assessed to one or more of these grades with certain degrees of belief. Using such a distributed assessment framework, the features of a range of evidence can be catered for whilst the assessor is not forced to pre-aggregate various types of evidence into a single numerical value. Both complete and incomplete assessments can be accommodated in a unified manner within the framework. For assessing different qualitative attributes, however, different sets of evaluation grades may need to be defined to facilitate data collection. Moreover, some attributes are quantitative and may be assessed using certain or random numbers. This increases complexity in attribute aggregation. In this paper, a generalised and extended decision matrix is constructed and rule and utility based techniques are developed for transforming various types of information within the matrix for aggregating attributes via ER. The transformation processes are characterised by a group of matrix equations. The techniques can be used in a hybrid way at different levels of an attribute hierarchy. It is proved in this paper that such transformations are equivalent with regard to underlying utility and rational in terms of preserving the features of original assessments. Complementary to distributed descriptions, utility intervals are introduced to describe and analyse incomplete and imprecise information. Two numerical examples are provided to demonstrate the implementation procedures of the new techniques and the potential and scope of the rule and utility based ER approach in supporting decision analysis under uncertainties.

631 citations


Journal ArticleDOI
TL;DR: This study compares the hybrid algorithms in terms of solution quality and computation time on a number of packing problems of different size and shows the effectiveness of the design of the different algorithms.
Abstract: In this paper we consider the two-dimensional (2D) rectangular packing problem, where a fixed set of items has to be allocated on a single object. Two heuristics, which belong to the class of packing procedures that preserve bottom-left (BL) stability, are hybridised with three meta-heuristic algorithms (genetic algorithms (GA), simulated annealing (SA), naive evolution (NE)) and a local search heuristic (hill-climbing). This study compares the hybrid algorithms in terms of solution quality and computation time on a number of packing problems of different size. In order to show the effectiveness of the design of the different algorithms, their performance is compared to random search (RS) and heuristic packing routines.

458 citations


Journal ArticleDOI
TL;DR: A multiplicative decision model based on fuzzy majority is presented to choose the best alternatives, and several transformation functions are obtained to relate preference orderings and utility functions with multiplicative preference relations.
Abstract: A multiperson decision-making problem, where the information about the alternatives provided by the experts can be presented by means of different preference representation structures (preference orderings, utility functions and multiplicative preference relations) is studied. Assuming the multiplicative preference relation as the uniform element of the preference representation, a multiplicative decision model based on fuzzy majority is presented to choose the best alternatives. In this decision model, several transformation functions are obtained to relate preference orderings and utility functions with multiplicative preference relations. The decision model uses the ordered weighted geometric operator to aggregate information and two choice degrees to rank the alternatives, quantifier guided dominance degree and quantifier guided non-dominance degree. The consistency of the model is analysed to prove that it acts coherently.
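For readers unfamiliar with the ordered weighted geometric (OWG) operator mentioned above, here is a minimal sketch; the positional weights and the sample row of a multiplicative preference relation are illustrative choices, not values from the paper.

```python
import numpy as np

def owg(values, weights):
    """Ordered weighted geometric (OWG) operator: reorder the arguments in
    descending order, then take the weighted geometric mean with positional weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and len(v) == len(w)
    return float(np.prod(v ** w))

# Aggregating one row of a multiplicative preference relation (Saaty-type 1/9..9 scale).
print(owg([3.0, 1.0, 1 / 5.0], [0.2, 0.6, 0.2]))
```

Because the weights attach to ordered positions rather than to particular alternatives, the operator can emphasise the best, the worst or the middling judgements, which is how the fuzzy-majority choice degrees are built.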

419 citations


Journal ArticleDOI
TL;DR: This article aims at the systematic derivation of ecologically extended DEA models by incorporating a multi-dimensional value function f.
Abstract: The measurement of ecological efficiency provides some important information for the companies’ environmental management. Ecological efficiency is usually measured by comparing environmental performance indicators. Data envelopment analysis (DEA) shows a high potential to support such comparisons, as no explicit weights are needed to aggregate the indicators. In general, DEA assumes that inputs and outputs are ‘goods’, but from an ecological perspective also ‘bads’ have to be considered. In the literature, ‘bads’ are treated in different and sometimes arbitrarily chosen ways. This article aims at the systematic derivation of ecologically extended DEA models. Starting from the assumptions of DEA in production theory and activity analysis, a generalisation of basic DEA models is derived by incorporating a multi-dimensional value function f. Extended preference structures can be considered by different specifications of f, e.g. specifications for ecologically motivated applications of DEA.

395 citations


Journal ArticleDOI
TL;DR: The results obtained indicate that the TLRN is capable of predicting speed up to 5 minutes into the future with a high degree of accuracy, which represents a substantial improvement on conventional model performance and clearly demonstrates the feasibility of using the object-oriented approach for short-term traffic prediction.
Abstract: This paper discusses an object-oriented neural network model that was developed for predicting short-term traffic conditions on a section of the Pacific Highway between Brisbane and the Gold Coast in Queensland, Australia. The feasibility of this approach is demonstrated through a time-lag recurrent network (TLRN) which was developed for predicting speed data up to 15 minutes into the future. The results obtained indicate that the TLRN is capable of predicting speed up to 5 minutes into the future with a high degree of accuracy (90–94%). Similar models, which were developed for predicting freeway travel times on the same facility, were successful in predicting travel times up to 15 minutes into the future with a similar degree of accuracy (93–95%). These results represent substantial improvements on conventional model performance and clearly demonstrate the feasibility of using the object-oriented approach for short-term traffic prediction.

Journal ArticleDOI
TL;DR: An efficient heuristic solution procedure that utilizes the solution generated from a Lagrangian relaxation of the problem is presented and results of extensive tests indicate that the solution method is both efficient and effective.
Abstract: We study an integrated logistics model for locating production and distribution facilities in a multi-echelon environment. Designing such logistics systems requires two essential decisions, one strategic (e.g., where to locate plants and warehouses) and the other operational (distribution strategy from plants to customer outlets through warehouses). The distribution strategy is influenced by the product mix at each plant, the shipments of raw material from vendors to manufacturing plants and the distribution of finished products from the plants to the different customer zones through a set of warehouses. First we provide a mixed integer programming formulation to the integrated model. Then, we present an efficient heuristic solution procedure that utilizes the solution generated from a Lagrangian relaxation of the problem. We use this heuristic procedure to evaluate the performance of the model with respect to solution quality and algorithm performance. Results of extensive tests on the solution procedure indicate that the solution method is both efficient and effective. Finally, a “real-world” example is solved to explore the implications of the model.

Journal ArticleDOI
TL;DR: The suggested “adaptive” approach allows policymakers to cope with the uncertainties that confront them by creating policies that respond to changes over time and that make explicit provision for learning.
Abstract: Public policies must be devised in spite of profound uncertainties about the future. When there are many plausible scenarios for the future, it may well be impossible to construct any single static policy that will perform well in all of them. It is likely, however, that the uncertainties that confront planners will be resolved over the course of time by new information. Thus, policies should be adaptive – devised not to be optimal for a best estimate future, but robust across a range of plausible futures. Such policies should combine actions that are time urgent with those that make important commitments to shape the future and those that preserve needed flexibility for the future. In this paper, we propose an approach to policy formulation and implementation that explicitly confronts the pragmatic reality that policies will be adjusted as the world changes and as new information becomes available. Our suggested “adaptive” approach allows policymakers to cope with the uncertainties that confront them by creating policies that respond to changes over time and that make explicit provision for learning. The approach makes adaptation explicit at the outset of policy formulation. Thus, the inevitable policy changes become part of a larger, recognized process and are not forced to be made repeatedly on an ad hoc basis. This adaptive approach implies fundamental changes in the three major elements of policy-making: the analytical approach, the types of policies considered, and the decision-making process.

Journal ArticleDOI
TL;DR: A panorama of preference disaggregation methods is presented and the most important results and applications over the last 20 years are summarized.
Abstract: The philosophy of preference disaggregation in multicriteria decision-aid systems (MCDA) is to assess/infer global preference models from the given preferential structures and to address decision-aiding activities. This paper presents a panorama of preference disaggregation methods and summarises the most important results and applications over the last 20 years.

Journal ArticleDOI
TL;DR: To overcome the difficulties that DEA encounters when there is an excessive number of inputs or outputs, principal component analysis (PCA) is employed to aggregate certain, clustered data, whilst ensuring very similar results to those achieved under the original DEA model.
Abstract: US experience shows that deregulation of the airline industry leads to the formation of hub-and-spoke (HS) airline networks. Viewing potential HS networks as decision-making units, we use data envelopment analysis (DEA) to select the most efficient networks configurations from the many that are possible in the deregulated European Union airline market. To overcome the difficulties that DEA encounters when there is an excessive number of inputs or outputs, we employ principal component analysis (PCA) to aggregate certain, clustered data, whilst ensuring very similar results to those achieved under the original DEA model. The DEA–PCA formulation is then illustrated with real-world data gathered from the West European air transportation industry.
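A minimal sketch of the PCA aggregation step, assuming standardised data and a 95% variance cut-off (both my own illustrative choices); the data matrix is invented, and how negative component scores are handled before they enter the DEA model is a detail the sketch leaves out.

```python
import numpy as np

# Illustrative matrix of (DMUs x correlated output measures); values are invented.
outputs = np.array([
    [120., 115., 130.],
    [ 80.,  85.,  78.],
    [200., 190., 210.],
    [ 60.,  65.,  59.],
])

# Standardise, then project onto the leading principal components so that a
# handful of aggregate "outputs" replaces the original, strongly correlated ones.
Z = (outputs - outputs.mean(axis=0)) / outputs.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
explained = S**2 / np.sum(S**2)
k = np.searchsorted(np.cumsum(explained), 0.95) + 1   # keep ~95% of the variance
scores = Z @ Vt[:k].T                                  # PCA scores fed to the DEA model
print(k, scores.round(3))
```

With strongly correlated indicators, one or two components typically suffice, which is what keeps the DEA results close to those of the original, higher-dimensional model.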

Journal ArticleDOI
TL;DR: It is shown in several examples that although the optimal schedule may be very different from that of the classical version of the problem, and the computational effort becomes significantly greater, polynomial-time solutions still exist.
Abstract: In many realistic settings, the production facility (a machine, a worker) improves continuously as a result of repeating the same or similar activities; hence, the later a given product is scheduled in the sequence, the shorter its production time. This “learning effect” is investigated in the context of various scheduling problems. It is shown in several examples that although the optimal schedule may be very different from that of the classical version of the problem, and the computational effort becomes significantly greater, polynomial-time solutions still exist. In particular, we introduce polynomial-time solutions for the single-machine makespan minimization problem, for two multi-criteria single-machine problems, and for the minimum flow-time problem on parallel identical machines.
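Assuming the log-linear learning model common in this literature (position-dependent processing time p_j·r^a with a < 0, an assumption here since the abstract does not spell out the model), the snippet below brute-forces the single-machine makespan problem on a toy instance and recovers the shortest-processing-time (SPT) sequence.

```python
from itertools import permutations

# Log-linear learning model (an assumption for this sketch): the job scheduled
# in position r takes p_j * r**a time units, with a < 0.
def makespan(seq, p, a=-0.322):            # a = log2(0.8): an "80% learning curve"
    return sum(p[j] * (r + 1) ** a for r, j in enumerate(seq))

p = [7.0, 2.0, 5.0, 3.0]                   # basic processing times (illustrative)
best = min(permutations(range(len(p))), key=lambda s: makespan(s, p))
spt = tuple(sorted(range(len(p)), key=lambda j: p[j]))
print(best, spt, round(makespan(best, p), 3))   # brute force recovers the SPT order here
```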

Journal ArticleDOI
TL;DR: A heuristic solution algorithm is developed that applies successive linear programming based on the reformulation and the relaxation of the original problem to produce feasible solutions with very small gaps between the solutions and their upper bound.
Abstract: We present a model for the optimization of a global supply chain that maximizes the after-tax profits of a multinational corporation and that includes transfer prices and the allocation of transportation costs as explicit decision variables. The resulting mathematical formulation is a non-convex optimization problem with a linear objective function, a set of linear constraints, and a set of bilinear constraints. We develop a heuristic solution algorithm that applies successive linear programming based on the reformulation and the relaxation of the original problem. Our computational experiments investigate the impact of using different starting points. The algorithm produces feasible solutions with very small gaps between the solutions and their upper bound (UB).

Journal ArticleDOI
TL;DR: A way of mapping workflow into Petri nets is briefly presented, which can be used as a basis for workflow systems, and the available papers on Petri net-based modelling of workflow are reviewed and classified.
Abstract: Despite their wide range of applications, workflow systems still suffer from the lack of an agreed and standard modelling technique. This is a motivating research area, and several researchers have proposed different modelling techniques. Petri nets, among the other techniques, are one of the most widely used modelling techniques for both qualitative and quantitative analysis of workflow and workflow systems. We briefly present a way of mapping workflow into Petri nets, which can be used as a basis for such systems. Many of the available papers on Petri net-based modelling of workflow are reviewed and classified.
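As a small illustration of the mapping idea, here is a minimal place/transition Petri net in Python executing a three-step sequential workflow; the class, the place names and the workflow itself are invented for illustration rather than taken from the paper.

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every input
    place holds at least one token; firing moves one token along each arc."""
    def __init__(self, marking, transitions):
        self.marking = dict(marking)                 # place -> token count
        self.transitions = transitions               # name -> (input places, output places)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in ins)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} not enabled")
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A sequential workflow "receive order -> check order -> archive", one case in 'start'.
net = PetriNet(
    marking={"start": 1},
    transitions={
        "receive": (["start"], ["received"]),
        "check":   (["received"], ["checked"]),
        "archive": (["checked"], ["done"]),
    },
)
for t in ["receive", "check", "archive"]:
    net.fire(t)
print(net.marking)   # {'start': 0, 'received': 0, 'checked': 0, 'done': 1}
```

Tasks map to transitions, workflow states to places, and cases to tokens, which is what makes both qualitative analysis (reachability, deadlocks) and quantitative analysis possible on the same model.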

Journal ArticleDOI
TL;DR: A comparative analysis of the production processes and production management problems for the SM–CC–HR and the traditional cold charge process is given and planning and scheduling systems developed and methods used for SM– CC–HR production are reviewed.
Abstract: The iron and steel industry is an essential and sizable sector for industrialized economies. Since it is capital and energy intensive, companies have been putting consistent emphasis on technology advances in the production process to increase productivity and to save energy. The modern integrated process of steelmaking, continuous casting and hot rolling (SM–CC–HR) directly connects the steelmaking furnace, the continuous caster and the hot rolling mill with hot metal flow and achieves synchronized production. Such a process has many advantages over the traditional cold charge process. However, it also brings new challenges for production planning and scheduling. In this paper we first give a comparative analysis of the production processes and production management problems for the SM–CC–HR and the traditional cold charge process. We then review planning and scheduling systems developed and methods used for SM–CC–HR production. Finally, some key issues for further research in this field are discussed.

Journal ArticleDOI
TL;DR: It is found that for 482 US firms in 1992, pollution prevention and end-of-pipe efficiencies are both negatively related to ROS, and that this negative relationship is larger and more significant for pollution prevention efficiencies.
Abstract: Environmental advocates maintain that waste minimization, recycling, remanufacturing and other environmental practices will greatly enhance the “bottom-line” for organizations. We investigate the differential relationships between (separately) pollution prevention and end-of-pipe efficiencies with short-run financial performance (measured using return on sales (ROS)). After controlling for both firm size and financial leverage, we find that for 482 US firms in 1992, pollution prevention and end-of-pipe efficiencies are both negatively related to ROS, and that this negative relationship is larger and more significant for pollution prevention efficiencies.

Journal ArticleDOI
TL;DR: This is the first experiment in which the subjects created the alternatives and attributes themselves; the results suggest that the weights obtained differ across methods because the methods explicitly or implicitly lead the decision makers to choose their responses from a limited set of numbers.
Abstract: The convergent validity of five multiattribute weighting methods is studied in an Internet experiment. This is the first experiment where the subjects created the alternatives and attributes themselves. Each subject used five methods to assess attribute weights – one version of the analytic hierarchy process (AHP), direct point allocation, simple multiattribute rating technique (SMART), swing weighting, and tradeoff weighting. They can all be used following the principles of multiattribute value theory. Furthermore, SMART, swing, and AHP ask the decision makers to give directly the numerical estimates of weight ratios although the elicitation questions are different. In earlier studies these methods have yielded different weights. Our results suggest that the resulting weights are different because the methods explicitly or implicitly lead the decision makers to choose their responses from a limited set of numbers. The other consequences from this are that the spread of weights and the inconsistency between the preference statements depend on the number of attributes that a decision maker considers simultaneously.
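To show the kind of elicitation the experiment compares, the snippet below turns direct point allocations and swing ratings into normalised attribute weights; the attribute names and scores are invented, and the other methods in the study (AHP, SMART, tradeoff weighting) involve additional steps not sketched here.

```python
def normalise(scores):
    """Turn raw importance scores (points, swing ratings) into weights summing to one."""
    total = float(sum(scores.values()))
    return {attribute: score / total for attribute, score in scores.items()}

# Direct point allocation: the respondent splits 100 points across attributes.
print(normalise({"price": 50, "quality": 30, "delivery": 20}))

# Swing weighting: the most important swing gets 100, the others are rated against it.
print(normalise({"price": 100, "quality": 60, "delivery": 40}))
```

Because respondents tend to pick round numbers such as 10, 50 or 100 in both formats, the normalised weights inherit that coarseness, which is the behaviour the study points to.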

Journal ArticleDOI
TL;DR: The role of vertical co-op advertising efficiency with respect to transactions between a manufacturer and a retailer through brand name investments, local advertising expenditures, and sharing rules of advertising expenses is explored.
Abstract: In the literature of cooperative (co-op) advertising, the focus of research is on a relationship in which a manufacturer is the leader and retailers are followers. This relationship implies the dominance of the manufacturer over retailers. Recent market structure reviews have shown a shift of retailing power from manufacturers to retailers. Retailers have equal or even greater power than a manufacturer when it comes to retailing. Based on this new market phenomenon, we intend to explore the role of vertical co-op advertising efficiency with respect to transactions between a manufacturer and a retailer through brand name investments, local advertising expenditures, and sharing rules of advertising expenses. Three co-op advertising models are discussed which are based on two noncooperative games and one cooperative game. In a leader–follower noncooperative game, the manufacturer is assumed to be a leader who first specifies the brand name investment and the co-op subsidization policy. The retailer, as a follower, then decides on the local advertising level. In a noncooperative simultaneous move game, the manufacturer and the retailer are assumed to act simultaneously and independently. In a cooperative game, the system profit is maximized for every Pareto efficient co-op advertising scheme, but not for any other schemes. All Pareto efficient co-op advertising schemes are associated with a single local advertising level and a single brand name investment level, but with variable sharing policies of advertising expenses. The best Pareto efficient advertising scheme is obtained taking members' risk attitudes into account. Utilizing the Nash bargaining model, we discuss two situations that (a) both members are risk averse, and (b) both members are risk neutral. Our results are consistent with the bargaining literature.

Journal ArticleDOI
TL;DR: A heuristic procedure based on the genetic algorithm for determining a dynamic berth assignment to ships in the public berth system is developed and it is shown that the proposed algorithm is adaptable to real world applications.
Abstract: This paper addresses the problem of determining a dynamic berth assignment to ships in the public berth system. While the public berth system may not be suitable for most container ports in major countries, it is desired for higher cost-effectiveness in Japan’s ports. The berth allocation to calling ships is a key factor for efficient public berthing. However, it cannot be computed in polynomially bounded time. To obtain a good solution with considerably small computational effort, we developed a heuristic procedure based on the genetic algorithm. We conducted a large number of computational experiments, which showed that the proposed algorithm is adaptable to real world applications.

Journal ArticleDOI
TL;DR: A hybrid genetic algorithm for the container loading problem with boxes of different sizes and a single container for loading that uses specific genetic operators based on an integrated greedy heuristic to generate offspring.
Abstract: This paper presents a hybrid genetic algorithm (GA) for the container loading problem with boxes of different sizes and a single container for loading. Generated stowage plans include several vertical layers each containing several boxes. Within the procedure, stowage plans are represented by complex data structures closely related to the problem. To generate offspring, specific genetic operators are used that are based on an integrated greedy heuristic. The process takes several practical constraints into account. Extensive test calculations, including comparisons with procedures from other authors, confirm the good performance of the GA, above all for problems with strongly heterogeneous boxes.

Journal ArticleDOI
TL;DR: An overview of the research, models and literature about optimisation approaches to the problem of optimally locating one or more new facilities in an environment where competing facilities are already established is given.
Abstract: We give an overview of the research, models and literature about optimisation approaches to the problem of optimally locating one or more new facilities in an environment where competing facilities are already established.

Journal ArticleDOI
TL;DR: An algorithm is presented that can find shortest order picking tours in this type of warehouse; it appears that in many cases the average order picking time can be decreased significantly by adding a middle aisle to the layout.
Abstract: This paper considers a parallel aisle warehouse, where order pickers can change aisles at the ends of every aisle and also at a cross aisle halfway along the aisles. An algorithm is presented that can find shortest order picking tours in this type of warehouse. The algorithm is applicable in warehouse situations with up to three aisle changing possibilities. Average tour length is compared for warehouses with and without a middle aisle. It appears that in many cases the average order picking time can be decreased significantly by adding a middle aisle to the layout.

Journal ArticleDOI
TL;DR: A model to study and analyze the benefit of coordinating supply chain inventories through the use of common replenishment epochs or time periods for a one-vendor, multi-buyer supply chain for a single product is proposed.
Abstract: This paper proposes a model to study and analyze the benefit of coordinating supply chain inventories through the use of common replenishment epochs or time periods. A one-vendor, multi-buyer supply chain for a single product is analyzed. Under the proposed strategy, the vendor specifies common replenishment periods and requires all buyers to replenish only at those time periods. The vendor offers a price discount to entice the buyers to accept this strategy. The optimal replenishment period and the price discount to be offered by the vendor are determined as a solution to a Stackelberg game. After developing a method to solve the game, a numerical study is conducted to evaluate the benefit of the proposed coordinated strategy.
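A back-of-the-envelope sketch of the trade-off the abstract describes, assuming a standard EOQ-type cost for the buyer (ordering plus holding cost) and a per-unit discount that exactly offsets the buyer's cost increase from moving to the common replenishment epoch; the parameter values and the compensation rule are illustrative, not the paper's Stackelberg solution.

```python
# Annual cost of a buyer who orders every T years (classic EOQ-type cost).
def annual_cost(T, demand, order_cost, holding_cost):
    return order_cost / T + holding_cost * demand * T / 2.0

demand, order_cost, holding_cost = 1200.0, 50.0, 2.0
T_star = (2 * order_cost / (holding_cost * demand)) ** 0.5   # buyer's own optimal cycle (years)
T_common = 1.0 / 6.0                                         # vendor-imposed epoch: every 2 months

extra = (annual_cost(T_common, demand, order_cost, holding_cost)
         - annual_cost(T_star, demand, order_cost, holding_cost))
discount_per_unit = extra / demand    # smallest per-unit discount that keeps the buyer whole
print(round(T_star, 3), round(extra, 2), round(discount_per_unit, 4))
```

The vendor gains from synchronised replenishments and shares part of that gain as a price discount; the game in the paper determines both the common period and the discount simultaneously.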

Journal ArticleDOI
Abstract: One fundamental aspect of the variable precision rough sets (VPRS) model involves a search for subsets of condition attributes which provide the same information for classification purposes as the full set of available attributes. Such subsets are labelled ‘approximate reducts’ or ‘β-reducts’, being defined for a specified classification error denoted by β. This paper undertakes a further investigation of the criteria for a β-reduct within VPRS. Certain anomalies and interesting implications are identified. An additional condition is suggested for finding β-reducts which assures a more general level of knowledge equivalent to that of the full set of attributes.
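A toy sketch of the β-reduct idea: the quality of classification is the share of objects lying in indiscernibility blocks whose majority decision has error at most β, and a candidate β-reduct is an attribute subset that preserves this quality; the data table and function names are invented, and the paper's additional condition is not reproduced.

```python
from collections import defaultdict

def majority_inclusion(block, decisions):
    """Largest fraction of objects in an indiscernibility block sharing one decision."""
    counts = defaultdict(int)
    for i in block:
        counts[decisions[i]] += 1
    return max(counts.values()) / len(block)

def quality(table, decisions, attrs, beta):
    """Share of objects in blocks classified with error at most beta (VPRS-style)."""
    blocks = defaultdict(list)
    for i, row in enumerate(table):
        blocks[tuple(row[a] for a in attrs)].append(i)
    covered = sum(len(b) for b in blocks.values()
                  if 1 - majority_inclusion(b, decisions) <= beta)
    return covered / len(table)

# Tiny illustrative table: 3 condition attributes plus a decision column.
table     = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (0, 1, 0), (1, 1, 0)]
decisions = ["yes", "yes", "no", "no", "yes"]
beta = 0.0
full = quality(table, decisions, [0, 1, 2], beta)
# A candidate beta-reduct keeps the quality of the full attribute set.
print(full, quality(table, decisions, [0, 1], beta))
```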

Journal ArticleDOI
TL;DR: A justification of two qualitative counterparts of the expected utility criterion for decision under uncertainty, which only require bounded, linearly ordered, valuation sets for expressing uncertainty and preferences, and proposes an operationally testable description of possibility theory.
Abstract: This paper presents a justification of two qualitative counterparts of the expected utility criterion for decision under uncertainty, which only require bounded, linearly ordered valuation sets for expressing uncertainty and preferences. This is carried out in the style of Savage, starting with a set of acts equipped with a complete preordering relation. Conditions on acts are given that imply a possibilistic representation of the decision-maker’s uncertainty. In this framework, pessimistic (i.e., uncertainty-averse) as well as optimistic attitudes can be explicitly captured. The approach thus proposes an operationally testable description of possibility theory.
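A small sketch of the two qualitative criteria on a common finite ordinal scale, using the standard pessimistic (min–max with an order-reversing map) and optimistic (max–min) forms from possibilistic decision theory; the scale, states and utility values are invented for illustration.

```python
L = 4  # common finite ordinal scale {0, ..., 4} for both plausibility and utility

def n(level):
    """Order-reversing map on the scale."""
    return L - level

def pessimistic_utility(act, plausibility):
    """Qualitative pessimistic criterion: min over states of max(n(pi(s)), u(f(s)))."""
    return min(max(n(plausibility[s]), act[s]) for s in plausibility)

def optimistic_utility(act, plausibility):
    """Qualitative optimistic criterion: max over states of min(pi(s), u(f(s)))."""
    return max(min(plausibility[s], act[s]) for s in plausibility)

plaus = {"rain": 4, "sun": 2}            # possibility degrees of the states
umbrella = {"rain": 3, "sun": 2}         # utility of each act in each state
no_umbrella = {"rain": 0, "sun": 4}
for name, act in [("umbrella", umbrella), ("no umbrella", no_umbrella)]:
    print(name, pessimistic_utility(act, plaus), optimistic_utility(act, plaus))
```

Only order matters on the valuation scale, which is what distinguishes these criteria from the numerical expected utility they mirror.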

Journal ArticleDOI
TL;DR: A fuzzy G.P. approach is applied to the optimum portfolio for a private investor, taking into account three criteria: return, risk and liquidity, where the goals and the constraints are fuzzy.
Abstract: Portfolio selection is a typical multiobjective problem. This paper deals with the optimum portfolio for a private investor, taking into account three criteria: return, risk and liquidity. These objectives, in general, are not crisp from the point of view of the investor, so we deal with them in fuzzy terms. The problem is formulated as a goal programming (G.P.) problem in which the goals and the constraints are fuzzy. We apply a fuzzy G.P. approach to obtain a solution and then offer the investor help in interpreting and handling the results.
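A minimal sketch of one common way to solve such a problem: a max-min (Zimmermann-style) fuzzy goal programming formulation solved as a linear program with scipy. The asset data, the linear membership functions and the aspiration levels are invented, and the paper's own formulation may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Three-asset toy portfolio; expected return, risk proxy and liquidity score per asset.
r = np.array([0.08, 0.12, 0.05])      # return (higher is better)
q = np.array([0.10, 0.25, 0.02])      # risk proxy (lower is better)
l = np.array([0.90, 0.40, 1.00])      # liquidity (higher is better)

goals = [  # (coefficients, worst acceptable level, aspired level, sense: +1 higher is better)
    (r, 0.06, 0.10, +1),
    (q, 0.20, 0.08, -1),
    (l, 0.50, 0.90, +1),
]

# Variables: x1, x2, x3, lambda.  Maximise lambda  <=>  minimise -lambda.
c = np.array([0.0, 0.0, 0.0, -1.0])
A_ub, b_ub = [], []
for coeff, worst, aspired, sense in goals:
    span = abs(aspired - worst)
    if sense > 0:   # linear membership (coeff@x - worst)/span must be >= lambda
        A_ub.append(np.r_[-coeff / span, 1.0]); b_ub.append(-worst / span)
    else:           # linear membership (worst - coeff@x)/span must be >= lambda
        A_ub.append(np.r_[coeff / span, 1.0]); b_ub.append(worst / span)
A_eq, b_eq = [[1.0, 1.0, 1.0, 0.0]], [1.0]         # fully invested portfolio
bounds = [(0, 1)] * 4                               # no short selling, lambda in [0, 1]
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, lam = res.x[:3], res.x[3]
print(x.round(3), round(lam, 3))   # portfolio weights and the smallest membership degree
```

Maximising the smallest membership degree lambda balances the three fuzzy goals, which matches the spirit of treating return, risk and liquidity as soft targets rather than crisp constraints.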