Author

Peter Brucker

Bio: Peter Brucker is an academic researcher from the University of Osnabrück. The author has contributed to research in the topics of Job shop scheduling and Flow shop scheduling. The author has an h-index of 39 and has co-authored 117 publications receiving 6,757 citations.


Papers
Journal ArticleDOI
TL;DR: A classification scheme is provided, i.e. a description of the resource environment, the activity characteristics, and the objective function, which is compatible with machine scheduling and makes it possible to classify the most important models dealt with so far; a unifying notation is also proposed.

1,489 citations

Journal ArticleDOI
TL;DR: The classical job-shop scheduling problem is generalized so that each operation of a job can be processed by any machine from an associated set, and a polynomial algorithm is derived for the case of two jobs.
Abstract: Consider the following generalization of the classical job-shop scheduling problem in which a set of machines is associated with each operation of a job. The operation can be processed on any of the machines in this set. For each assignment μ of operations to machines let P(μ) be the corresponding job-shop problem and f(μ) be the minimum makespan of P(μ). How can an assignment which minimizes f(μ) be found? For problems with two jobs a polynomial algorithm is derived.
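A minimal brute-force sketch of the problem stated above, under the assumption of exactly two jobs: it enumerates every assignment μ of operations to eligible machines and, for each fixed assignment, every interleaving of the two jobs' operation sequences (i.e. all semi-active schedules), returning the smallest makespan f(μ). This is exponential and only suitable for tiny instances; it is not the paper's polynomial algorithm, and the instance data and function names are illustrative.

```python
from itertools import product, combinations

# Two jobs; each operation is (eligible_machines, processing_time). Illustrative data.
JOB1 = [({"M1", "M2"}, 3), ({"M2"}, 2), ({"M1", "M3"}, 4)]
JOB2 = [({"M1"}, 2), ({"M2", "M3"}, 3)]

def makespan_fixed_assignment(jobs, assignment):
    """Exact makespan for a fixed machine assignment by enumerating all
    interleavings of the two jobs' operation sequences (semi-active schedules)."""
    n1, n2 = len(jobs[0]), len(jobs[1])
    best = float("inf")
    # Choose which positions in the merged processing order belong to job 0.
    for pos1 in combinations(range(n1 + n2), n1):
        order, i1, i2 = [], 0, 0
        for t in range(n1 + n2):
            if t in pos1:
                order.append((0, i1)); i1 += 1
            else:
                order.append((1, i2)); i2 += 1
        job_ready = [0, 0]      # earliest time the next operation of each job may start
        mach_ready = {}         # time each machine becomes free
        for (j, k) in order:
            m = assignment[(j, k)]
            p = jobs[j][k][1]
            start = max(job_ready[j], mach_ready.get(m, 0))
            job_ready[j] = mach_ready[m] = start + p
        best = min(best, max(job_ready))
    return best

def best_assignment(jobs):
    """Enumerate every assignment mu and return (min f(mu), argmin)."""
    keys = [(j, k) for j in (0, 1) for k in range(len(jobs[j]))]
    choices = [sorted(jobs[j][k][0]) for (j, k) in keys]
    best_val, best_mu = float("inf"), None
    for combo in product(*choices):
        mu = dict(zip(keys, combo))
        val = makespan_fixed_assignment(jobs, mu)
        if val < best_val:
            best_val, best_mu = val, mu
    return best_val, best_mu

if __name__ == "__main__":
    f_mu, mu = best_assignment([JOB1, JOB2])
    print("minimum makespan:", f_mu)
    print("assignment:", mu)
```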

526 citations

Journal ArticleDOI
TL;DR: A fast branch and bound algorithm for the job-shop scheduling problem has been developed; it solves the 10 × 10 benchmark problem, which had been open for more than 20 years.
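The branch and bound itself is not reproduced here; the sketch below only illustrates one standard ingredient of such job-shop algorithms, a single-machine lower bound obtained from Jackson's preemptive schedule for operations with heads (release dates) and tails (delivery times). It is a generic illustration, not necessarily the exact bound of this paper, and the data format is an assumption.

```python
import heapq

def jackson_preemptive_bound(ops):
    """Lower bound from one machine: ops is a list of (head r, processing p, tail q).
    Returns the optimal preemptive value of max_j (C_j + q_j), obtained by always
    running the released, unfinished operation with the largest tail; this value
    lower-bounds the non-preemptive optimum and hence the job-shop makespan."""
    ops = sorted(ops)                      # by release date r
    i, t, bound, n = 0, 0, 0, len(ops)
    avail = []                             # max-heap on tail: [-q, remaining p]
    while i < n or avail:
        if not avail:
            t = max(t, ops[i][0])          # jump to the next release if idle
        while i < n and ops[i][0] <= t:
            r, p, q = ops[i]
            heapq.heappush(avail, [-q, p])
            i += 1
        q_neg, _ = avail[0]
        # run the largest-tail operation until it finishes or the next release
        next_r = ops[i][0] if i < n else float("inf")
        run = min(avail[0][1], next_r - t)
        t += run
        avail[0][1] -= run
        if avail[0][1] == 0:
            heapq.heappop(avail)
            bound = max(bound, t + (-q_neg))
    return bound

# Illustrative use: heads/tails would be derived from the job routes of a shop instance.
print(jackson_preemptive_bound([(0, 3, 5), (1, 2, 7), (4, 4, 0)]))
```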

463 citations

Journal ArticleDOI
TL;DR: In this paper, the problem of scheduling n jobs on a batching machine to minimize regular scheduling criteria that are non-decreasing in the job completion times was studied, and it was shown that minimizing the weighted number of tardy jobs and the total weighted tardiness are NP-hard problems.
Abstract: We address the problem of scheduling n jobs on a batching machine to minimize regular scheduling criteria that are non-decreasing in the job completion times. A batching machine is a machine that can handle up to b jobs simultaneously. The jobs that are processed together form a batch, and all jobs in a batch start and complete at the same time. The processing time of a batch is equal to the largest processing time of any job in the batch. We analyse two variants: the unbounded model, where b ≥ n, and the bounded model, where b < n. For the case with m different processing times, we give a dynamic programming algorithm that requires O(b^2 m^2 2^m) time. Moreover, we prove that due date based scheduling criteria give rise to NP-hard problems. Finally, we show that an arbitrary regular cost function can be minimized in polynomial time for a fixed number of batches. © 1998 John Wiley & Sons, Ltd
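For the unbounded model (b ≥ n) and total completion time, a short dynamic program illustrates the kind of result discussed above. It relies on the structural property that some optimal schedule is an SPT-batch schedule, i.e. batches consist of consecutive jobs in shortest-processing-time order; that assumption and the O(n^2) recursion below are a generic textbook-style sketch, not the paper's own algorithms.

```python
def min_total_completion_time_unbounded(p):
    """1 | p-batch, b >= n | sum C_j, assuming SPT-batch optimality.
    Jobs are sorted by processing time; dp[i] = minimum total completion time of
    jobs i..n-1 when they start at time 0.  Opening a first batch that ends at
    job j-1 takes max time p[j-1] and delays all n - i remaining jobs by it."""
    p = sorted(p)
    n = len(p)
    dp = [float("inf")] * (n + 1)
    dp[n] = 0
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n + 1):
            dp[i] = min(dp[i], (n - i) * p[j - 1] + dp[j])
    return dp[0]

# Tiny check with p = [1, 2, 3]: one batch of all three gives 3 + 3 + 3 = 9;
# batches {1},{2,3} give 1 + 2*(1+3) = 9; singleton batches give 1 + 3 + 6 = 10.
print(min_total_completion_time_unbounded([1, 2, 3]))   # prints 9
```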

389 citations

Journal ArticleDOI
TL;DR: A branch and bound algorithm is presented for the resource-constrained project scheduling problem (RCPSP), and concepts of immediate selection are developed in connection with the branching scheme.
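The paper's branch and bound is not reproduced here; as a generic illustration of the underlying RCPSP, the sketch below implements a serial schedule generation scheme that builds a precedence- and resource-feasible schedule from a given activity list. It is a standard heuristic building block, not this paper's algorithm, and the instance data are illustrative.

```python
# Serial schedule generation scheme (SGS) for the RCPSP: schedule the activities
# of a precedence-feasible priority list one by one at the earliest
# resource-feasible start time.  Illustrative instance data.

CAPACITY = {"R1": 4}
ACTIVITIES = {            # name: (duration, {resource: demand}, predecessors)
    "a": (3, {"R1": 2}, []),
    "b": (2, {"R1": 3}, []),
    "c": (4, {"R1": 2}, ["a"]),
    "d": (2, {"R1": 2}, ["b", "c"]),
}

def serial_sgs(activities, capacity, priority_list):
    usage = {}                        # usage[t][r] = capacity consumed in period t
    finish = {}                       # finish time of each scheduled activity
    def fits(t, dur, demand):
        return all(usage.get(t + k, {}).get(r, 0) + d <= capacity[r]
                   for k in range(dur) for r, d in demand.items())
    for a in priority_list:
        dur, demand, preds = activities[a]
        t = max((finish[p] for p in preds), default=0)   # precedence feasibility
        while not fits(t, dur, demand):                  # resource feasibility
            t += 1
        for k in range(dur):
            slot = usage.setdefault(t + k, {})
            for r, d in demand.items():
                slot[r] = slot.get(r, 0) + d
        finish[a] = t + dur
    return finish, max(finish.values())

finish_times, makespan = serial_sgs(ACTIVITIES, CAPACITY, ["a", "b", "c", "d"])
print("finish times:", finish_times, "makespan:", makespan)
```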

339 citations


Cited by
Proceedings Article
01 Jan 2010
TL;DR: Adaptive subgradient methods, as discussed by the authors, dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning, which makes it possible to find needles in haystacks in the form of very predictive but rarely seen features.
Abstract: We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.
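A minimal sketch of the diagonal adaptive update commonly associated with this work: per-coordinate squared gradients are accumulated, and each coordinate's step is scaled by the inverse square root of that accumulator. The hyperparameters, loss, and data below are illustrative, and the paper also covers composite (proximal) and full-matrix variants not shown here.

```python
import numpy as np

def adagrad(grad, x0, lr=0.1, eps=1e-8, steps=500):
    """Diagonal AdaGrad: x <- x - lr * g / (sqrt(sum of squared past grads) + eps)."""
    x = np.asarray(x0, dtype=float).copy()
    g_sq_sum = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        g_sq_sum += g * g                     # per-coordinate accumulator
        x -= lr * g / (np.sqrt(g_sq_sum) + eps)
    return x

# Illustrative use: least squares, gradient of 0.5*||Ax - b||^2 is A^T (Ax - b).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
x_hat = adagrad(lambda x: A.T @ (A @ x - b), np.zeros(5), lr=0.5, steps=2000)
print("residual norm:", np.linalg.norm(A @ x_hat - b))
```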

7,244 citations

Journal Article
TL;DR: This work describes and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal functions that can be chosen in hindsight.

6,984 citations

Journal ArticleDOI
TL;DR: This chapter presents the basic schemes of variable neighborhood search (VNS) and some of its extensions, along with five families of applications in which VNS has proven to be very successful.
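A compact sketch of the basic VNS scheme: shake in the k-th neighborhood, run a local search, and move (resetting k to 1) only on improvement, otherwise enlarge k. The neighborhood structure, the coordinate-descent local search, and the test function are illustrative choices, not taken from the chapter.

```python
import random

def basic_vns(f, x0, k_max=5, iters=200, step=0.1, seed=0):
    """Basic variable neighborhood search: shake, local search, move or not."""
    rng = random.Random(seed)
    def shake(x, k):                       # k-th neighborhood: larger random jump
        return [xi + rng.uniform(-k * step, k * step) for xi in x]
    def local_search(x):                   # simple coordinate descent
        x = list(x)
        for _ in range(50):
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    y = list(x); y[i] += d
                    if f(y) < f(x):
                        x, improved = y, True
            if not improved:
                break
        return x
    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = local_search(shake(best, k))
            if f(cand) < f(best):
                best, k = cand, 1          # move and restart from the first neighborhood
            else:
                k += 1
    return best, f(best)

# Illustrative test: minimize a shifted quadratic.
sol, val = basic_vns(lambda x: sum((xi - 1.5) ** 2 for xi in x), [0.0, 0.0])
print(sol, val)
```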

3,572 citations

Journal ArticleDOI
TL;DR: In this article, the authors survey the state-of-the-art in NFV and identify promising research directions in this area, and also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.
Abstract: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.

1,634 citations