scispace - formally typeset
Author

Gregory Dobson

Bio: Gregory Dobson is an academic researcher at the University of Rochester. He has contributed to research on heuristics and scheduling (production processes), has an h-index of 23, and has co-authored 56 publications receiving 2,591 citations.


Papers
Journal ArticleDOI
TL;DR: The problem faced by a monopolist is formulated as a mathematical program; the authors outline how to obtain market data from a sample of customers, discuss which cost data are relevant, and suggest a heuristic algorithm to solve the problem.
Abstract: A central problem in marketing is: how should the firm position, reposition, and price a line of related substitute products in order to maximize profits or welfare? We formulate this problem faced by a monopolist as a mathematical program, outline how to obtain the market data from a sample of customers, discuss what cost data are relevant, and suggest a heuristic algorithm to solve the problem. The output of the process is a list of products to offer, their prices, and the customer segments which purchase each product. While additional real-world complexities, e.g., uncertainty about customer wants, product performance, and competitive response, are not modeled, we believe the system developed can serve as an important input into the decision process when new products are designed and priced. The methodology can be used as part of a decision support system, where management specifies the number of products desired. The system suggests a few good solutions, together with the prices and customer segments served by each product. We use the standard assumption that the market is composed of different customer segments of various sizes, each containing homogeneous customers. Customers choose one brand only, the one that provides them with maximum value for the money. The firm faces both fixed and variable production and marketing costs for each product. Competition is either nonexistent or assumed not to respond to the firm's moves. The information available to the firm is the sizes and preferences of the segments, based on a sample of customers, and the cost data. As an alternative to the traditional approach of estimating a parametric utility function and aggregating customers into segments, we can also use the raw data as input, where each customer in the sample represents a segment. This, we believe, allows us to reduce the errors introduced in the process. Heuristics for solving the problem are suggested.
The heuristics are evaluated on a set of simulated problems and compared to the optimal solutions. The heuristics perform well when compared to all feasible solutions on a set of small simulated problems. We also discuss the application of the procedure to a 'real-life'-sized problem.
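The segment choice rule in the abstract — each homogeneous segment buys the one product giving it maximum value for the money, and buys nothing if no product yields a nonnegative surplus — can be sketched in a few lines. All names and numbers here are illustrative, not taken from the paper.

```python
def segment_choices(values, prices):
    """values[s][p] = segment s's valuation of product p; prices[p] = its price.
    Returns, for each segment, the index of the chosen product, or None if
    no product offers a nonnegative surplus (value minus price)."""
    choices = []
    for vals in values:
        surpluses = [v - p for v, p in zip(vals, prices)]
        best = max(range(len(prices)), key=lambda j: surpluses[j])
        choices.append(best if surpluses[best] >= 0 else None)
    return choices

# Two segments, two products: segment 0 buys product 1; segment 1 buys nothing.
print(segment_choices([[10.0, 14.0], [3.0, 2.0]], [6.0, 7.0]))  # -> [1, None]
```

A product-line optimizer of the kind the paper describes would wrap a search over offered products and prices around this choice computation.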

281 citations

Journal ArticleDOI
TL;DR: The main results characterize when feasible schedules exist, quantify the insensitivity of the schedules' costs to minor adjustments, and thus show how close the schedules will be to ones with optimal equal cycle times.
Abstract: This paper considers the Economic Lot Scheduling Problem: that is, the problem of scheduling several products on a single facility so as to minimize holding and setup costs. We develop a formulation that provides feasible schedules by allowing the lot sizes and thus the cycle times for each product to vary over time and by explicitly taking into account setup times. Our main results characterize when feasible schedules exist, quantify the insensitivity of the schedules' costs to minor adjustments, and thus show how close the schedules will be to ones with optimal equal cycle times. We also present a heuristic for finding good feasible schedules.
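As a rough illustration of the cost structure behind the Economic Lot Scheduling Problem, the classical independent-solution bound optimizes each product's cycle cost separately, ignoring the shared facility. This is a textbook sketch under standard ELSP assumptions (constant demand rate d, production rate P, setup cost s, holding cost h per unit per unit time), not the paper's time-varying formulation.

```python
import math

def elsp_independent_lower_bound(setup, hold, demand, rate):
    """Sum of each product's optimal stand-alone cycle cost:
    cost_i(T) = s_i/T + 0.5 * h_i * d_i * (1 - d_i/P) * T,
    minimized at T* = sqrt(2*s_i / (h_i * d_i * (1 - d_i/P))).
    Ignoring the shared facility makes this a lower bound on any
    feasible schedule's average cost."""
    total = 0.0
    for s, h, d in zip(setup, hold, demand):
        factor = h * d * (1 - d / rate)  # effective holding-cost rate
        t_star = math.sqrt(2 * s / factor)  # cost-minimizing cycle time
        total += s / t_star + 0.5 * factor * t_star
    return total

# One product: s=100, h=1, d=10, P=50 gives T*=5 and cost 40.
print(elsp_independent_lower_bound([100.0], [1.0], [10.0], 50.0))
```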

240 citations

Journal ArticleDOI
TL;DR: The authors formalize product-line design as a mathematical program in which the firm's objective is either profit or total welfare, and develop a new greedy heuristic for the profit problem; on simulated problems it runs quickly and outperforms various alternatives and previously published heuristics.
Abstract: Designing and pricing a product line is the very essence of every business. In recent years quantitative methods to assist managers in this task have been gaining in popularity. Conjoint analysis is already widely used to measure preferences for different product profiles and build market simulation models. In the last few years several papers have been published that suggest how to optimally choose a product line based on such data. We formalize this problem as a mathematical program where the objective of the firm is either profit or total welfare. Unlike alternative published approaches, we introduce fixed and variable costs for each product profile. The number of products to be introduced is endogenously determined on the basis of their desirability, fixed and variable costs, and, in the case of profits, their cannibalization effect on other products. While the problem is NP-complete and thus difficult, we show that the maximum-welfare problem is equivalent to the uncapacitated plant location problem, which can be solved very efficiently using the greedy interchange heuristic. Based on past published experience with this problem, and on simulations we perform, we show that optimal or near-optimal solutions are obtained in seconds for large problems. We develop a new greedy heuristic for the profit problem, and its application to simulated problems shows that it too runs quickly, with better performance than various alternatives and previously published heuristics. We also show how the methodology can be applied, taking existing products of both the firm and the competition into account.
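The equivalence to the uncapacitated plant location problem means a simple greedy opening heuristic applies to the maximum-welfare problem: repeatedly open the product ("plant") whose marginal customer benefit net of fixed cost is largest, stopping when no candidate improves the objective. A minimal sketch, with illustrative data and names:

```python
def greedy_uflp(fixed_cost, benefit):
    """Greedy heuristic for the uncapacitated plant location problem.
    benefit[i][j] = value customer i derives from plant j.
    Returns the set of opened plant indices."""
    n_cust, n_plant = len(benefit), len(fixed_cost)
    open_set = set()
    best_val = [0.0] * n_cust  # best benefit each customer currently gets
    while True:
        gains = {}
        for j in range(n_plant):
            if j in open_set:
                continue
            # Marginal benefit of opening j, net of its fixed cost.
            gain = sum(max(0.0, benefit[i][j] - best_val[i]) for i in range(n_cust))
            gains[j] = gain - fixed_cost[j]
        if not gains:
            break
        j_best = max(gains, key=gains.get)
        if gains[j_best] <= 0:
            break  # no candidate improves the objective
        open_set.add(j_best)
        for i in range(n_cust):
            best_val[i] = max(best_val[i], benefit[i][j_best])
    return open_set

# Two customers, two plants: both plants are worth opening here.
print(greedy_uflp([5.0, 5.0], [[10.0, 2.0], [1.0, 9.0]]))  # -> {0, 1}
```

The interchange step the paper mentions would additionally try swapping an open plant for a closed one; the sketch above performs only greedy opening.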

237 citations

Journal ArticleDOI
TL;DR: A worst-case analysis is given for two greedy heuristics for the integer program: minimize cx subject to Ax ≥ b, 0 ≤ x ≤ u, x integer, where the entries of A, b, and c are all nonnegative.
Abstract: We give a worst-case analysis for two greedy heuristics for the integer programming problem: minimize cx subject to Ax ≥ b, 0 ≤ x ≤ u, x integer, where the entries in A, b, and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral; the second assumes only that the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the value of the greedy solution to that of the integer optimum. The error bound grows logarithmically in the maximum column sum of A for both heuristics.
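A standard greedy rule for this covering problem class increments, at each step, the variable with the best cost per unit of residual coverage. The sketch below illustrates the generic ratio rule (not necessarily either of the paper's two heuristics), with the box constraints x ≤ u omitted for brevity:

```python
def greedy_cover(c, A, b):
    """Greedy heuristic for: minimize c.x subject to A x >= b, x >= 0 integer,
    with all data nonnegative. Repeatedly increments the variable whose
    cost-to-residual-coverage ratio is smallest."""
    m, n = len(A), len(c)
    resid = list(b)  # uncovered demand per row
    x = [0] * n
    while any(r > 0 for r in resid):
        best_j, best_ratio = None, float("inf")
        for j in range(n):
            # Coverage column j would contribute toward the remaining demand.
            cover = sum(min(A[i][j], resid[i]) for i in range(m))
            if cover > 0 and c[j] / cover < best_ratio:
                best_j, best_ratio = j, c[j] / cover
        if best_j is None:
            raise ValueError("infeasible: no column covers remaining demand")
        x[best_j] += 1
        for i in range(m):
            resid[i] = max(0, resid[i] - A[i][best_j])
    return x

# A small set-cover instance: elements {0,1,2}, sets {0,1}, {1,2}, {2}.
print(greedy_cover([1.0, 1.0, 1.0], [[1, 0, 0], [1, 1, 0], [0, 1, 1]], [1, 1, 1]))
```

On set-cover instances like this one, the logarithmic error bound the abstract describes corresponds to the classical Hmax-approximation behavior of greedy covering.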

202 citations

Journal ArticleDOI
TL;DR: The paper presents an integer programming formulation for the problem of batching and scheduling on certain kinds of batch processors, generates a lower bound from a partial LP relaxation, provides a polynomial algorithm to solve a special case, and tests a set of heuristics on the general problem.
Abstract: This paper discusses the problem of batching and scheduling of certain kinds of batch processors. Examples of these processors include heat treatment facilities, particularly in the steel and ceramics industries, as well as a variety of operations in the manufacture of integrated circuits. In general, for our problem there is a set of jobs waiting to be processed. Each job is associated with a given family and has a weight or delay cost and a volume. The scheduler must organize jobs into batches in which each batch consists of jobs from a single family and in which the total volume of jobs in a batch does not exceed the capacity of the processor. The scheduler must then sequence all the batches. The processing time for a batch depends only on the family and not on the number or the volume of jobs in the batch. The objective is to minimize the mean weighted flow time. The paper presents an integer programming formulation for this problem, generates a lower bound from a partial LP relaxation, provides a polynomial algorithm to solve a special case, and tests a set of heuristics on the general problem. The ability to pack jobs into batches is the key to efficient solutions and is the basis of the different solution procedures in this paper. The heuristics include a greedy heuristic, a successive knapsack heuristic, and a generalized assignment heuristic. Optimal solutions are obtained by complete enumeration for small problems. The conclusions of the computational study show that the successive knapsack and generalized assignment heuristics perform better than the greedy. The generalized assignment heuristic does slightly better than the successive knapsack heuristic in some cases, but the latter is substantially faster and more robust. For problems with few jobs, the generalized assignment heuristic and the knapsack heuristic almost always provide optimal solutions.
For problems with more jobs, we compare the heuristic solutions' values to lower bounds; the computational work suggests that the heuristics continue to provide solutions that are optimal or close to optimal. The study also shows that the volume of the job relative to the capacity of the facility and the number of jobs in a family affect the performance of the heuristics, whereas the number of families does not. Finally, we give a worst-case analysis of the greedy heuristic.
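Once batches are fixed, sequencing them to minimize mean weighted flow time reduces to the classical WSPT rule applied at the batch level: order batches by nonincreasing ratio of total job weight to batch processing time. A small sketch with hypothetical batches:

```python
def sequence_batches(batches):
    """Each batch is (total_weight, processing_time). Sorts batches by
    weight/time ratio (WSPT rule), then accumulates completion times to
    compute the total weighted flow time. Returns (order, total)."""
    order = sorted(batches, key=lambda b: b[0] / b[1], reverse=True)
    t, total = 0.0, 0.0
    for w, p in order:
        t += p          # batch completion time
        total += w * t  # weighted flow-time contribution
    return order, total

# The heavy, short batch (3, 1) is scheduled first.
print(sequence_batches([(1.0, 2.0), (3.0, 1.0)]))  # -> ([(3.0, 1.0), (1.0, 2.0)], 6.0)
```

The hard part of the paper's problem is forming the batches in the first place under the capacity and single-family constraints; the knapsack-style heuristics address that packing step, after which sequencing is easy.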

158 citations


Cited by
Book ChapterDOI
01 Jan 2011
TL;DR: Weak-convergence methods in metric spaces are studied, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak-convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Journal ArticleDOI
TL;DR: This paper looks inside the "black box" of product development at the fundamental decisions that are made by intention or default, adopting the perspective of product development as a deliberate business process involving hundreds of decisions, many of which can be usefully supported by knowledge and tools.
Abstract: This paper is a review of research in product development, which we define as the transformation of a market opportunity into a product available for sale. Our review is broad, encompassing work in the academic fields of marketing, operations management, and engineering design. The value of this breadth is in conveying the shape of the entire research landscape. We focus on product development projects within a single firm. We also devote our attention to the development of physical goods, although much of the work we describe applies to products of all kinds. We look inside the "black box" of product development at the fundamental decisions that are made by intention or default. In doing so, we adopt the perspective of product development as a deliberate business process involving hundreds of decisions, many of which can be usefully supported by knowledge and tools. We contrast this approach to prior reviews of the literature, which tend to examine the importance of environmental and contextual variables, such as market growth rate, the competitive environment, or the level of top-management support.

1,725 citations

01 Jan 1989
TL;DR: This survey focuses on the area of deterministic machine scheduling, and reviews complexity results and optimization and approximation algorithms for problems involving a single machine, parallel machines, open shops, flow shops and job shops.

1,401 citations

Journal ArticleDOI
TL;DR: This survey reviews the forty-year history of research on transportation revenue management and covers developments in forecasting, overbooking, seat inventory control, and pricing, as they relate to revenue management.
Abstract: This survey reviews the forty-year history of research on transportation revenue management (also known as yield management). We cover developments in forecasting, overbooking, seat inventory control, and pricing, as they relate to revenue management, and suggest future research directions. The survey includes a glossary of revenue management terminology and a bibliography of over 190 references.

1,162 citations

Journal ArticleDOI
TL;DR: It is proved that there is an ε > 0 such that Graph Coloring cannot be approximated with ratio n^ε unless P = NP, and that Set Covering cannot be approximated with ratio c log n for any c < 1/4 unless NP is contained in DTIME(n^(poly log n)).
Abstract: We prove results indicating that it is hard to compute efficiently good approximate solutions to the Graph Coloring, Set Covering and other related minimization problems. Specifically, there is an ε > 0 such that Graph Coloring cannot be approximated with ratio n^ε unless P = NP. Set Covering cannot be approximated with ratio c log n for any c < 1/4 unless NP is contained in DTIME(n^(poly log n)). Similar results follow for related problems such as Clique Cover, Fractional Chromatic Number, Dominating Set, and others.

1,025 citations