
Showing papers on "Benchmark (computing) published in 1999"


Journal ArticleDOI
TL;DR: A "fast EP" (FEP) is proposed which uses a Cauchy instead of Gaussian mutation as the primary search operator and is proposed and tested empirically, showing that IFEP performs better than or as well as the better of FEP and CEP for most benchmark problems tested.
Abstract: Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. EP has rather slow convergence rates, however, on some function optimization problems. In the paper, a "fast EP" (FEP) is proposed which uses a Cauchy instead of Gaussian mutation as the primary search operator. The relationship between FEP and classical EP (CEP) is similar to that between fast simulated annealing and the classical version. Both analytical and empirical studies have been carried out to evaluate the performance of FEP and CEP for different function optimization problems. The paper shows that FEP is very good at search in a large neighborhood while CEP is better at search in a small local neighborhood. For a suite of 23 benchmark problems, FEP performs much better than CEP for multimodal functions with many local minima while being comparable to CEP in performance for unimodal and multimodal functions with only a few local minima. The paper also shows the relationship between the search step size and the probability of finding a global optimum and thus explains why FEP performs better than CEP on some functions but not on others. In addition, the importance of the neighborhood size and its relationship to the probability of finding a near-optimum is investigated. Based on these analyses, an improved FEP (IFEP) is proposed and tested empirically. This technique mixes different search operators (mutations). The experimental results show that IFEP performs better than or as well as the better of FEP and CEP for most benchmark problems tested.
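
The only change FEP makes to classical EP is the mutation distribution, so the core idea fits in a few lines. The sketch below is illustrative only (fixed step sizes, a stand-in test function), not the authors' implementation; the Cauchy deviate is drawn via the inverse-CDF transform.

```python
import random
import math

def gaussian_mutation(x, eta):
    """Classical EP (CEP): perturb each component with a Gaussian deviate scaled by eta_i."""
    return [xi + ei * random.gauss(0.0, 1.0) for xi, ei in zip(x, eta)]

def cauchy_mutation(x, eta):
    """Fast EP (FEP): perturb each component with a standard Cauchy deviate, whose heavy
    tails produce occasional long jumps (i.e. search in a much larger neighborhood)."""
    return [xi + ei * math.tan(math.pi * (random.random() - 0.5)) for xi, ei in zip(x, eta)]

def sphere(x):
    """Simple unimodal test function, standing in for the benchmark suite."""
    return sum(xi * xi for xi in x)

if __name__ == "__main__":
    random.seed(0)
    parent = [random.uniform(-5, 5) for _ in range(10)]
    eta = [0.5] * len(parent)   # self-adaptive in real EP; fixed here for brevity
    print("CEP child fitness:", sphere(gaussian_mutation(parent, eta)))
    print("FEP child fitness:", sphere(cauchy_mutation(parent, eta)))
    # IFEP mixes both operators, keeping the better of the two offspring.
```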

3,412 citations


Journal ArticleDOI
TL;DR: A unique combination of high clock speeds and advanced microarchitectural techniques, including many forms of out-of-order and speculative execution, provide exceptional core computational performance in the 21264.
Abstract: Alpha microprocessors have been performance leaders since their introduction in 1992. The first generation 21064 and the later 21164 raised expectations for the newest generation-performance leadership was again a goal of the 21264 design team. Benchmark scores of 30+ SPECint95 and 58+ SPECfp95 offer convincing evidence thus far that the 21264 achieves this goal and will continue to set a high performance standard. A unique combination of high clock speeds and advanced microarchitectural techniques, including many forms of out-of-order and speculative execution, provide exceptional core computational performance in the 21264. The processor also features a high-bandwidth memory system that can quickly deliver data values to the execution core, providing robust performance for a wide range of applications, including those without cache locality. The advanced performance levels are attained while maintaining an installed application base. All Alpha generations are upward-compatible. Database, real-time visual computing, data mining, medical imaging, scientific/technical, and many other applications can utilize the outstanding performance available with the 21264.

828 citations


Journal ArticleDOI
TL;DR: This work presents a technique, based on the principles of optimal control, for determining the class of least restrictive controllers that satisfies the most important objective and shows how the proposed synthesis technique simplifies to well-known results from supervisory control and pursuit evasion games when restricted to purely discrete and purely continuous systems respectively.

678 citations


Book ChapterDOI
01 Jan 1999
TL;DR: A recently proposed metaheuristic, the Ant System, is used to solve the Vehicle Routing Problem in its basic form, i.e., with capacity and distance restrictions, one central depot and identical vehicles.
Abstract: In this paper we use a recently proposed metaheuristic, the Ant System, to solve the Vehicle Routing Problem in its basic form, i.e., with capacity and distance restrictions, one central depot and identical vehicles. A “hybrid” Ant System algorithm is first presented and then improved using problem-specific information (savings, capacity utilization). Experiments on various aspects of the algorithm and computational results for fourteen benchmark problems are reported and compared to those of other metaheuristic approaches such as Tabu Search, Simulated Annealing and Neural Networks.
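
To make the construction step concrete: an ant at customer i picks the next feasible customer probabilistically from pheromone and a heuristic "visibility" term, which in the hybrid variant can incorporate savings or capacity utilization. The sketch below only illustrates that selection rule with generic parameter names (tau, eta, alpha, beta); it is not the paper's algorithm.

```python
import random

def select_next_customer(current, feasible, tau, eta, alpha=1.0, beta=2.0):
    """Ant System construction step: choose the next customer j with probability
    proportional to tau[current][j]**alpha * eta[current][j]**beta, restricted to
    feasible customers (not yet visited, capacity/distance limits respected).
    tau = pheromone matrix, eta = heuristic attractiveness (e.g. savings- or
    distance-based); all values assumed positive."""
    weights = [(j, (tau[current][j] ** alpha) * (eta[current][j] ** beta)) for j in feasible]
    total = sum(w for _, w in weights)
    r = random.random() * total
    acc = 0.0
    for j, w in weights:            # roulette-wheel selection
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]
```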

432 citations


Posted Content
TL;DR: This paper proposes three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic that incorporates novel optimizations that improve efficiency greatly.
Abstract: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.
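
The overall shape of a greedy heuristic for this problem is easy to sketch: repeatedly materialize whichever shared sub-expression yields the largest drop in the estimated cost of the whole batch, and stop when no candidate helps. This is a generic sketch, not the Volcano-based implementation; total_cost is a hypothetical placeholder for the optimizer's cost estimate.

```python
def greedy_materialize(candidates, total_cost):
    """Greedily pick shared sub-expressions to materialize.

    candidates: set of candidate shared sub-expressions.
    total_cost(materialized): hypothetical function returning the estimated cost of
    evaluating the whole query batch given the chosen set of materialized results.
    """
    chosen = set()
    best = total_cost(chosen)
    while True:
        gain, pick = 0.0, None
        for c in candidates - chosen:
            new_cost = total_cost(chosen | {c})
            if best - new_cost > gain:
                gain, pick = best - new_cost, c
        if pick is None:        # no remaining candidate reduces the total cost
            return chosen, best
        chosen.add(pick)
        best -= gain
```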

336 citations


Proceedings ArticleDOI
09 Jan 1999
TL;DR: This work proposes hardware mechanisms that dynamically recognize and capitalize on "narrow-bitwidth" instances; one of the proposed optimizations reduces processor power consumption by using aggressive clock gating to turn off portions of integer arithmetic units that will be unnecessary for narrow-bitwidth operations.
Abstract: In general-purpose microprocessors, recent trends have pushed towards 64 bit word widths, primarily to accommodate the large addressing needs of some programs. Many integer problems, however, rarely need the full 64 bit dynamic range these CPUs provide. In fact, another recent instruction set trend has been increased support for sub-word operations (that is, manipulating data in quantities less than the full word size). In particular, most major processor families have introduced "multimedia" instruction set extensions that operate in parallel on several sub-word quantities in the same ALU. This paper notes that across the SPECint95 benchmarks, over half of the integer operation executions require 16 bits or less. With this as motivation, our work proposes hardware mechanisms that dynamically recognize and capitalize on these "narrow-bitwidth" instances. Both optimizations require little additional hardware, and neither requires compiler support. The first, power-oriented, optimization reduces processor power consumption by using aggressive clock gating to turn off portions of integer arithmetic units that will be unnecessary for narrow bitwidth operations. This optimization results in an over 50% reduction in the integer unit's power consumption for the SPECint95 and MediaBench benchmark suites. The second optimization improves performance by merging together narrow integer operations and allowing them to share a single functional unit. Conceptually akin to a dynamic form of MMX, this optimization offers speedups of 4.3%-6.2% for SPECint95 and 8.0%-10.4% for MediaBench.
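
The detection the hardware performs amounts to checking that the upper bits of each 64-bit operand are just copies of the sign bit of a narrow field. A software sketch of that check (illustrative only, not the paper's circuit):

```python
def is_narrow(value, width=16, word=64):
    """Return True if a two's-complement `word`-bit value is representable in `width` bits,
    i.e. its upper (word - width) bits are all copies of the sign bit. This mirrors the
    zero/one-detection that would drive the clock-gating of the upper ALU slice."""
    mask = (1 << word) - 1
    v = value & mask
    upper = v >> (width - 1)   # sign bit of the narrow field plus all higher bits
    return upper == 0 or upper == (1 << (word - width + 1)) - 1

# If both operands of an add are narrow, the upper portion of the adder could be gated
# off (power optimization) or shared with another narrow operation (merging optimization).
print(is_narrow(1234), is_narrow(-42), is_narrow(1 << 20))   # True True False
```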

304 citations


Book ChapterDOI
11 Oct 1999
TL;DR: A model benchmark library should contain as diverse and large a set of problems as possible, be independent of any particular constraint solver, and contain neither just hard nor just easy problems.
Abstract: Constraint satisfaction algorithms are often benchmarked on hard, random problems. There are, however, many reasons for wanting a larger class of problems in our benchmark suites. For example, we may wish to benchmark algorithms on more realistic problems, to run competitions, or to study the impact on modelling and problem reformulation. Whilst there are many other constructive benefits of a benchmark library, there are also several potential pitfalls. For example, if the library is small, we run the risk of over-fitting our algorithms. Even if the library is large, certain problem features may be rare or absent. A model benchmark library should be easy to find and easy to use. It should contain as diverse and large a set of problems as possible. It should be easy to extend, and as comprehensive and up to date as possible. It should also be independent of any particular constraint solver, and contain neither just hard nor just easy problems.

287 citations


Journal ArticleDOI
TL;DR: A key feature in the implementation of Edmonds' blossom algorithm for solving minimum-weight perfect matching problems is the use of multiple search trees with an individual dual-change ε for each tree.
Abstract: We make several observations on the implementation of Edmonds' blossom algorithm for solving minimum-weight perfect matching problems and we present computational results for geometric problem instances ranging in size from 1,000 nodes up to 5,000,000 nodes. A key feature in our implementation is the use of multiple search trees with an individual dual-change ε for each tree. As a benchmark of the algorithm's performance, solving a 100,000-node geometric instance on a 200 MHz Pentium-Pro computer takes approximately 3 minutes.

250 citations


Proceedings ArticleDOI
07 Jun 1999
TL;DR: Some general attacks on audio and image marking systems are described and a benchmark to compare image marking software on a fair basis is proposed, based on a set of attacks that any system ought to survive.
Abstract: Hidden copyright marks have been proposed as a solution for solving the illegal copying and proof of ownership problems in the context of multimedia objects. Many systems have been proposed, but it is still difficult to have even a rough idea of their performance and hence to compare them. So we first describe some general attacks on audio and image marking systems. Then we propose a benchmark to compare image marking software on a fair basis. This benchmark is based on a set of attacks that any system ought to survive.
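
The benchmark idea itself is simple to model: subject a marked image to a fixed set of attacks and record which ones the detector survives. The sketch below is only an illustration of that scoring loop; the attack set is a small illustrative subset, and detect_mark stands in for whatever detector the system under test provides.

```python
import numpy as np

def add_noise(img, sigma=10.0):
    """Additive Gaussian noise attack (pixel values assumed in 0..255)."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0, 255)

def crop(img, frac=0.9):
    """Keep only the top-left fraction of the image."""
    h, w = img.shape[:2]
    return img[: int(h * frac), : int(w * frac)]

def downscale(img, step=2):
    """Crude subsampling attack."""
    return img[::step, ::step]

def benchmark(marked_img, detect_mark):
    """detect_mark(image) -> bool is a hypothetical detector supplied by the marking
    system under test. The score is the fraction of attacks the mark survives."""
    attacks = {"noise": add_noise, "crop": crop, "downscale": downscale}
    survived = sum(1 for attack in attacks.values() if detect_mark(attack(marked_img)))
    return survived / len(attacks)
```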

232 citations


Proceedings ArticleDOI
01 Jun 1999
TL;DR: This paper presents a methodology for cycle-accurate simulation of energy dissipation in embedded systems; performance and energy computed by the simulator were compared with hardware measurements and found to agree within a 5% tolerance.
Abstract: This paper presents a methodology for cycle-accurate simulation of energy dissipation in embedded systems. The ARM Ltd. instruction-level cycle-accurate simulator is extended with energy models for the processor, the L2 cache, the memory, the interconnect and the DC-DC converter. A SmartBadge, which can be seen as an embedded system consisting of StrongARM-1100 processor, memory and the DC-DC converter, is used to evaluate the methodology with the Dhrystone benchmark. We compared performance and energy computed by our simulator with measurements in hardware and found them in agreement within a 5% tolerance. The simulation methodology was applied to design exploration for enhancing a SmartBadge with real-time MPEG feature.
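
The heart of such a methodology is accumulating per-component energy every simulated cycle, driven by the cycle-accurate simulator's activity information. The sketch below is a generic illustration with made-up component names, clock rate and power figures; it is not the ARM/SmartBadge energy models.

```python
def simulate_energy(cycles, activity, power_active, power_idle, clock_hz=200e6):
    """Accumulate energy cycle by cycle.

    activity(cycle, component) -> bool: whether the component is busy that cycle
    (in the real tool this comes from the instruction-level cycle-accurate simulator).
    power_active / power_idle: per-component power draw in watts (illustrative values).
    Returns per-component energy in joules for the given clock rate."""
    clock_period = 1.0 / clock_hz
    energy = {c: 0.0 for c in power_active}
    for cycle in range(cycles):
        for comp in power_active:
            p = power_active[comp] if activity(cycle, comp) else power_idle[comp]
            energy[comp] += p * clock_period
    return energy

# Example with hypothetical components: processor, cache, memory, DC-DC converter.
power_on  = {"cpu": 0.4, "cache": 0.1, "mem": 0.2, "dcdc": 0.05}
power_off = {"cpu": 0.05, "cache": 0.01, "mem": 0.02, "dcdc": 0.05}
print(simulate_energy(1000, lambda c, comp: c % 2 == 0, power_on, power_off))
```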

215 citations


Book ChapterDOI
14 Jun 1999
TL;DR: An analysis of the memory usage for six of the Java programs in the SPECjvm98 benchmark suite finds that non-pointer data usually represents more than 50% of the allocated space for instance objects, that Java objects tend to live longer than objects in Smalltalk or ML, and that they are fairly small.
Abstract: We present an analysis of the memory usage for six of the Java programs in the SPECjvm98 benchmark suite. Most of the programs are real-world applications with high demands on the memory system. For each program, we measured as much low level data as possible, including age and size distribution, type distribution, and the overhead of object alignment. Among other things, we found that non-pointer data usually represents more than 50% of the allocated space for instance objects, that Java objects tend to live longer than objects in Smalltalk or ML, and that they are fairly small.

Proceedings ArticleDOI
01 May 1999
TL;DR: This work explores nonlinear array layout functions as an additional means of improving locality of reference and shows that two specific layouts have low implementation costs and high performance benefits and that recursion-based control structures may be needed to fully exploit their potential.
Abstract: Programming languages that provide multidimensional arrays and a flat linear model of memory must implement a mapping between these two domains to order array elements in memory. This layout function is fixed at language definition time and constitutes an invisible, non-programmable array attribute. In reality, modern memory systems are architecturally hierarchical rather than flat, with substantial differences in performance among different levels of the hierarchy. This mismatch between the model and the true architecture of memory systems can result in low locality of reference and poor performance. Some of this loss in performance can be recovered by re-ordering computations using transformations such as loop tiling. We explore nonlinear array layout functions as an additional means of improving locality of reference. For a benchmark suite composed of dense matrix kernels, we show by timing and simulation that two specific layouts (4D and Morton) have low implementation costs (2–5% of total running time) and high performance benefits (reducing execution time by factors of 1.1–2.5); that they have smooth performance curves, both across a wide range of problem sizes and over representative cache architectures; and that recursion-based control structures may be needed to fully exploit their potential.
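
One of the two layouts studied, Morton order, stores a 2-D array in the order given by interleaving the bits of the row and column indices, so that elements close in index space stay close at every level of the memory hierarchy. A minimal sketch of the index computation (illustrative, not the paper's implementation):

```python
def morton_index(row, col, bits=16):
    """Interleave the bits of (row, col): bit i of col goes to bit 2*i of the result,
    bit i of row goes to bit 2*i + 1."""
    z = 0
    for i in range(bits):
        z |= ((col >> i) & 1) << (2 * i)
        z |= ((row >> i) & 1) << (2 * i + 1)
    return z

# A Morton-laid-out matrix is then just a flat buffer indexed by morton_index(i, j):
# buf[morton_index(i, j)] plays the role of a[i][j].
print(morton_index(0, 0), morton_index(1, 0), morton_index(0, 1), morton_index(1, 1))  # 0 2 1 3
```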

Journal ArticleDOI
Sid Browne
TL;DR: In this paper, the authors consider the portfolio problem in continuous-time where the objective of the investor or money manager is to exceed the performance of a given stochastic benchmark, as is often the case in institutional money management.
Abstract: We consider the portfolio problem in continuous-time where the objective of the investor or money manager is to exceed the performance of a given stochastic benchmark, as is often the case in institutional money management. The benchmark is driven by a stochastic process that need not be perfectly correlated with the investment opportunities, and so the market is in a sense incomplete. We first solve a variety of investment problems related to the achievement of goals: for example, we find the portfolio strategy that maximizes the probability that the return of the investor's portfolio beats the return of the benchmark by a given percentage without ever going below it by another predetermined percentage. We also consider objectives related to the minimization of the expected time until the investor beats the benchmark. We show that there are two cases to consider, depending upon the relative favorability of the benchmark to the investment opportunity the investor faces. The problem of maximizing the expected discounted reward of outperforming the benchmark, as well as minimizing the discounted penalty paid upon being outperformed by the benchmark is also discussed. We then solve a more standard expected utility maximization problem which allows new connections to be made between some specific utility functions and the nonstandard goal problems treated here.
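
One way to write down the first of the goal problems described, under a standard continuous-time setup; the symbols here are chosen for illustration and are not the paper's exact notation or model.

```latex
\begin{aligned}
dX_t &= X_t\bigl[(r + f_t^{\top}(\mu - r\mathbf{1}))\,dt + f_t^{\top}\sigma\,dW_t\bigr]
  &&\text{(investor's wealth under portfolio $f$)}\\
dY_t &= Y_t\bigl[\alpha\,dt + \beta^{\top} dW_t + \gamma\,dB_t\bigr]
  &&\text{(stochastic benchmark, not perfectly correlated)}\\
Z_t &= X_t / Y_t, \qquad 0 < a < 1 < b:
  &&\text{choose } f \text{ to maximize } \Pr\bigl(Z \text{ hits } b \text{ before } a \mid Z_0 = 1\bigr),
\end{aligned}
```

i.e. beat the benchmark by (b-1)x100% before ever underperforming it by (1-a)x100%.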

Journal ArticleDOI
TL;DR: Presents a methodology for evaluating graphics recognition systems operating on images that contain straight lines, circles, circular arcs, and text blocks that enables an empirical comparison of vectorization software packages and uses practical performance evaluation methods that can be applied to complete vectorization systems.
Abstract: Presents a methodology for evaluating graphics recognition systems operating on images that contain straight lines, circles, circular arcs, and text blocks. It enables an empirical comparison of vectorization software packages and uses practical performance evaluation methods that can be applied to complete vectorization systems. The methodology includes a set of matching criteria for pairs of graphical entities, a set of performance evaluation metrics, and a benchmark for the evaluation of graphics recognition systems. The benchmark was tested on three systems. The results are reported and analyzed in the paper.

Proceedings ArticleDOI
01 Aug 1999
TL;DR: The paper describes the development of a benchmark for the evaluation of control strategies in wastewater treatment plants, a platform-independent simulation environment defining a plant layout, a simulation model, influent loads, test procedures and evaluation criteria.
Abstract: The paper describes the development of a benchmark for the evaluation of control strategies in wastewater treatment plants. The benchmark is a platform-independent simulation environment defining a plant layout, a simulation model, influent loads, test procedures and evaluation criteria. Several different research teams have contributed to the development of the benchmark and have obtained results using several simulation platforms (GPS-X™, Simulink™, Simba™, West™, FORTRAN code).

Journal ArticleDOI
TL;DR: A method that integrates path and timing analysis to accurately predict the worst-case execution time for real-time programs on high-performance processors and can exclude many infeasible program paths and calculate path information, such as bounds on number of loop iterations, without the need for manual annotations of programs.
Abstract: Previously published methods for estimation of the worst-case execution time on high-performance processors with complex pipelines and multi-level memory hierarchies result in overestimations owing to insufficient path and/or timing analysis. This not only gives rise to poor utilization of processing resources but also reduces the schedulability in real-time systems. This paper presents a method that integrates path and timing analysis to accurately predict the worst-case execution time for real-time programs on high-performance processors. The unique feature of the method is that it extends cycle-level architectural simulation techniques to enable symbolic execution with unknown input data values; it uses alternative instruction semantics to handle unknown operands. We show that the method can exclude many infeasible (or non-executable) program paths and can calculate path information, such as bounds on the number of loop iterations, without the need for manual annotations of programs. Moreover, the method is shown to accurately analyze timing properties of complex features in high-performance processors using multiple-issue pipelines and instruction and data caches. The combined path and timing analysis capability is shown to derive exact estimates of the worst-case execution time for six out of seven programs in our benchmark suite.
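
The key idea, alternative instruction semantics over possibly unknown operands, can be illustrated with a tiny three-valued interpreter: unknown values propagate through arithmetic, and an unknown branch outcome forces the analysis to consider both paths. This is only a sketch of the concept, not the paper's simulator.

```python
UNKNOWN = object()   # sentinel for a value the analysis cannot determine

def sym_add(a, b):
    """Addition under the extended semantics: any unknown operand yields an unknown result."""
    return UNKNOWN if a is UNKNOWN or b is UNKNOWN else a + b

def sym_less(a, b):
    """Comparison: an unknown outcome means both branch directions are feasible."""
    return UNKNOWN if a is UNKNOWN or b is UNKNOWN else a < b

def loop_bound(n):
    """If the trip count is known, the analysis derives an exact bound on loop iterations;
    otherwise it must explore both outcomes (or fall back to a worst-case bound)."""
    i, iterations = 0, 0
    while True:
        cond = sym_less(i, n)
        if cond is UNKNOWN:
            return UNKNOWN          # bound not derivable from this information alone
        if not cond:
            return iterations
        i, iterations = sym_add(i, 1), iterations + 1

print(loop_bound(10), loop_bound(UNKNOWN) is UNKNOWN)   # 10 True
```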

Book ChapterDOI
01 Jan 1999
TL;DR: With the development of project scheduling models and methods arose the need for data instances in order to benchmark the solution procedures, and characteristics of the projects have to be identified to allow a systematic evaluation of the performance of algorithms.
Abstract: With the development of project scheduling models and methods arose the need for data instances in order to benchmark the solution procedures. Generally, benchmark instances can be distinguished by their origin into real world problems and artificial problems. The analysis of algorithmic performance on real world problem instances is of high practical relevance, but at the same time it is only an analysis of individual cases. Consequently, general conclusions about the algorithms cannot be drawn. A solution procedure which shows very good performance on one real world instance might produce poor results on another. In order to allow a systematic evaluation of the performance of algorithms, characteristics of the projects have to be identified. The characteristics can then serve as the parameters for the systematic generation of artificial instances. The variation of the levels of these problem parameters in a full factorial design study makes it possible to produce a set of well-balanced instances (cf. Montgomery 1976).
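
A full factorial design simply enumerates every combination of the chosen levels of the problem parameters and generates several instances per cell. The sketch below uses illustrative parameter names and levels; the yielded settings would be fed into an actual project instance generator, which is only a placeholder here.

```python
from itertools import product

# Illustrative problem parameters and levels (e.g. network complexity, resource factor,
# resource strength); the actual characteristics are those identified in the chapter.
levels = {
    "network_complexity": [1.5, 1.8, 2.1],
    "resource_factor":    [0.25, 0.5, 0.75, 1.0],
    "resource_strength":  [0.2, 0.5, 0.7, 1.0],
}

def full_factorial(levels, instances_per_cell=10):
    """Yield one parameter setting per cell of the full factorial design, with several
    replications per cell, so the resulting instance set is well balanced."""
    names = sorted(levels)
    for combo in product(*(levels[n] for n in names)):
        setting = dict(zip(names, combo))
        for replication in range(instances_per_cell):
            yield setting, replication   # hand each setting to a real instance generator

print(sum(1 for _ in full_factorial(levels)))   # 3 * 4 * 4 cells * 10 replications = 480
```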

Journal ArticleDOI
TL;DR: The recently introduced Structured Messy Genetic Algorithm model for optimising water distribution network rehabilitation is expanded to include not only pipe rehabilitation decisions but also pumping installations and storage tanks as variables.

01 Jan 1999
TL;DR: The behavior of the SPEC95 benchmark suite over the course of execution is classified, correlating IPC, branch prediction, value prediction, address prediction, cache performance, and reorder buffer occupancy.
Abstract: Modern architecture research relies heavily on detailed pipeline simulation. Furthermore, programs oftentimes exhibit interesting and important time varying behavior on an extremely large scale. Very little analysis has been conducted to classify the time varying behavior of popular benchmarks using detailed simulation for important architecture features. In this paper we classify the behavior of the SPEC95 benchmark suite over the course of execution, correlating the behavior between IPC, branch prediction, value prediction, address prediction, cache performance, and reorder buffer occupancy. Branch prediction, cache performance, value prediction, and address prediction are currently some of the most influential architecture features driving microprocessor research, and we show important interactions and relationships between these features. In addition, we show that many programs have wildly different behavior during different parts of their execution, which makes the section of the program simulated of great importance to the relevance and correctness of a study. We show that the large scale behavior of the programs is cyclic in nature, point out the length of the cyclic behavior for these programs, and suggest where to simulate to achieve results representative of the program as a whole.
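
The kind of analysis described amounts to slicing execution into fixed-size instruction intervals, summarizing each interval (IPC, prediction accuracy, miss rate, ...), and plotting the summaries over the run to expose large-scale, often cyclic, behavior. A hedged sketch over a generic stream of per-instruction records; the record format is an assumption for illustration.

```python
def interval_profile(records, interval=1_000_000):
    """records: iterable of (cycles, branch_correct, cache_miss) tuples, one per committed
    instruction, as a detailed simulator might emit (branch_correct is None for
    non-branches). Returns one summary per interval; plotting these over the run
    reveals time-varying and cyclic program behavior."""
    profiles, cyc, br_ok, br, misses, n = [], 0, 0, 0, 0, 0
    for cycles, branch_correct, cache_miss in records:
        cyc += cycles
        if branch_correct is not None:
            br += 1
            br_ok += branch_correct
        misses += cache_miss
        n += 1
        if n == interval:
            profiles.append({
                "ipc": n / cyc if cyc else 0.0,
                "branch_acc": br_ok / br if br else 1.0,
                "miss_rate": misses / n,
            })
            cyc = br_ok = br = misses = n = 0
    return profiles
```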

Proceedings ArticleDOI
01 Jan 1999
TL;DR: It is shown that the communication protocols used by the MPI runtime library strongly influence the communication performance of applications, and that the benchmark codes have a wide spectrum of communication requirements.
Abstract: We present a study of the architectural requirements and scalability of the NAS Parallel Benchmarks. Through direct measurements and simulations, we identify the factors which affect the scalability of benchmark codes on two relevant and distinct platforms: a cluster of workstations and a ccNUMA SGI Origin 2000. We find that the benefit of increased global cache size is pronounced in certain applications and often offsets the communication cost. By constructing the working set profile of the benchmarks, we are able to visualize the improvement of computational efficiency under constant-problem-size scaling. We also find that, while the Origin MPI has better point-to-point performance, the cluster MPI layer is more scalable with communication load. However, communication performance within the applications is often much lower than what would be achieved by micro-benchmarks. We show that the communication protocols used by the MPI runtime library strongly influence the communication performance of applications, and that the benchmark codes have a wide spectrum of communication requirements.

Proceedings ArticleDOI
Scott Davidson
28 Sep 1999
TL;DR: The goal of this benchmarking effort is to test new DFT techniques on real designs, using the DAT test generation system and two sequential test generators developed at the University of Iowa.
Abstract: The goal of this benchmarking effort is to test new DFT techniques on these real designs. Six panelists will present their preliminary results. Mario Konijnenburg of Philips will present full scan test generation results as a baseline, using the DAT test generation system. Raghuram Tupuri will present results from a hierarchical test generator, creating at-speed tests using functional knowledge without the need for scan. Professor J-E Santucci will describe a test generator for design verification tests, using techniques derived from software testing. Professor S. M. Reddy will describe results from two sequential test generators developed at the University of Iowa. Dr. Chouki Aktouf will describe techniques for the insertion of scan at the functional level. Professor Sujit Dey will describe the testability of one of the benchmarks, and some functional BIST approaches.

Proceedings ArticleDOI
28 Mar 1999
TL;DR: This paper argues for an application-directed approach to benchmarking, using performance metrics that reflect the expected behavior of a particular application across a range of hardware or software platforms.
Abstract: Most performance analysis today uses either microbenchmarks or standard macrobenchmarks (e.g. SPEC, LADDIS, the Andrew benchmark). However, the results of such benchmarks provide little information to indicate how well a particular system will handle a particular application. Such results are, at best, useless and, at worst, misleading. In this paper we argue for an application-directed approach to benchmarking, using performance metrics that reflect the expected behavior of a particular application across a range of hardware or software platforms. We present three different approaches to application-specific measurement, one using vectors that characterize both the underlying system and an application, one using trace-driven techniques, and a hybrid approach. We argue that such techniques should become the new standard.
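
In the vector-based approach, a system vector (micro-benchmark costs of primitive operations) is combined with an application vector (how often the application performs each primitive); the inner product gives an application-specific prediction that can be compared across platforms. The operation names and costs below are illustrative assumptions, not results from the paper.

```python
def predicted_time(app_vector, system_vector):
    """app_vector[op]   : how many times the application performs primitive operation `op`.
       system_vector[op]: measured cost of `op` on the target system (seconds per operation,
                          obtained from micro-benchmarks).
    The predicted application run time is their inner product; comparing this figure
    across systems is the application-specific 'benchmark result'."""
    return sum(count * system_vector[op] for op, count in app_vector.items())

app   = {"seq_read_4k": 2_000_000, "random_write_4k": 50_000, "stat": 300_000}  # illustrative
sys_a = {"seq_read_4k": 2e-6, "random_write_4k": 9e-5, "stat": 4e-6}
sys_b = {"seq_read_4k": 3e-6, "random_write_4k": 3e-5, "stat": 6e-6}
print(predicted_time(app, sys_a), predicted_time(app, sys_b))
```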

Proceedings ArticleDOI
01 Feb 1999
TL;DR: An ultra-fast placement algorithm targeted to FPGAs that can generate a placement for a 100,000-gate circuit in 10 seconds on a 300 MHz Sun UltraSPARC workstation that is only 33% worse than a high-quality placement that takes 524 seconds using a pure simulated annealing implementation.
Abstract: The demand for high-speed FPGA compilation tools has arisen for three reasons: first, as FPGA device capacity has grown, the computation time devoted to placement and routing has grown more dramatically than the compute power of the available computers. Second, there exists a subset of users who are willing to accept a reduction in the quality of result in exchange for a high-speed compilation. Third, high-speed compile has been a long-standing desire of users of FPGA-based custom computing machines, since their compile time requirements are ideally closer to those of regular computers. This paper focuses on the placement phase of the compile process, and presents an ultra-fast placement algorithm targeted to FPGAs. The algorithm is based on a combination of multiple-level, bottom-up clustering and hierarchical simulated annealing. It provides superior area results over a known high-quality placement tool on a set of large benchmark circuits, when both are restricted to a short run time. For example, it can generate a placement for a 100,000-gate circuit in 10 seconds on a 300 MHz Sun UltraSPARC workstation that is only 33% worse than a high-quality placement that takes 524 seconds using a pure simulated annealing implementation. In addition, operating in its fastest mode, this tool can provide an accurate estimate of the wirelength achievable with good quality placement. This can be used, in conjunction with a routing predictor, to very quickly determine the routability of a given circuit on a given FPGA device.

Proceedings ArticleDOI
09 Jan 1999
TL;DR: It is found that the input and output values of basic blocks can be quite regular and predictable, suggesting that using compiler support to extend value prediction and reuse to a coarser granularity may have substantial performance benefits.
Abstract: Value prediction at the instruction level has been introduced to allow more aggressive speculation and reuse than previous techniques. We investigate the input and output values of basic blocks and find that these values can be quite regular and predictable, suggesting that using compiler support to extend value prediction and reuse to a coarser granularity may have substantial performance benefits. For the SPEC benchmark programs evaluated, 90% of the basic blocks have fewer than 4 register inputs, 5 live register outputs, 4 memory inputs and 2 memory outputs. About 16% to 41% of all the basic blocks are simply repeating earlier calculations when the programs are compiled with the -O2 optimization level in the GCC compiler. We evaluate the potential benefit of basic block reuse using a novel mechanism called a block history buffer. This mechanism records input and live output values of basic blocks to provide value prediction and reuse at the basic block level. Simulation results show that using a reasonably sized block history buffer to provide basic block reuse in a 4-way issue superscalar processor can improve execution time for the tested SPEC programs by 1% to 14% with an overall average of 9%.
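
A block history buffer can be modeled as a table keyed by a basic block's identity and its input values, returning the previously recorded live outputs so the whole block can be reused instead of re-executed. The sketch below is a software model for illustration only (including the eviction policy); it is not the hardware structure evaluated in the paper.

```python
class BlockHistoryBuffer:
    """Software model of a block history buffer: maps (block_pc, register/memory inputs)
    to the block's recorded live outputs. A hit means the block is repeating an earlier
    calculation and its outputs can be reused without re-executing it."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.table = {}

    def lookup(self, block_pc, inputs):
        return self.table.get((block_pc, inputs))       # None -> execute the block normally

    def record(self, block_pc, inputs, live_outputs):
        if len(self.table) >= self.capacity:
            self.table.pop(next(iter(self.table)))      # crude FIFO-style eviction for the sketch
        self.table[(block_pc, inputs)] = live_outputs

bhb = BlockHistoryBuffer()
bhb.record(0x400100, inputs=(3, 7), live_outputs=(10, 21))
print(bhb.lookup(0x400100, (3, 7)))   # (10, 21): reuse the recorded outputs; a miss returns None
```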

Journal ArticleDOI
TL;DR: A noisy-vowel corpus is used and four possible models for audiovisual speech recognition are proposed, leading to proposals for data representation, fusion architecture, and control of the fusion process through sensor reliability.
Abstract: Audiovisual speech recognition involves fusion of the audio and video sensors for phonetic identification. There are three basic ways to fuse data streams for taking a decision such as phoneme identification: data-to-decision, decision-to-decision, and data-to-data. This leads to four possible models for audiovisual speech recognition, that is direct identification in the first case, separate identification in the second one, and two variants of the third early integration case, namely dominant recoding or motor recoding. However, no systematic comparison of these models is available in the literature. We propose an implementation of these four models, and submit them to a benchmark test. For this aim, we use a noisy-vowel corpus tested on two recognition paradigms in which the systems are tested at noise levels higher than those used for learning. In one of these paradigms, the signal-to-noise ratio (SNR) value is provided to the recognition systems, in the other it is not. We also introduce a new criterion for evaluating performances, based on transmitted information on individual phonetic features. In light of the compared performances of the four models with the two recognition paradigms, we discuss the advantages and drawbacks of these models, leading to proposals for data representation, fusion architecture, and control of the fusion process through sensor reliability.
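
The contrast between direct identification (fuse the data, then classify) and separate identification (classify each modality, then fuse the decisions) can be made concrete as below. The classifiers are hypothetical placeholders and the reliability weighting is one simple illustrative choice, not the paper's recognizers.

```python
import numpy as np

def direct_identification(audio_feat, video_feat, classify_joint):
    """Data-to-data fusion: concatenate audio and video features and classify once.
    classify_joint(features) -> posterior over phonemes (hypothetical joint model)."""
    return classify_joint(np.concatenate([audio_feat, video_feat]))

def separate_identification(audio_feat, video_feat, classify_audio, classify_video,
                            audio_weight=0.5):
    """Decision-to-decision fusion: classify each stream, then combine the posteriors
    (numpy arrays). The weight can be driven by sensor reliability, e.g. an estimate
    of the audio SNR, which is the kind of control the paper discusses."""
    pa = classify_audio(audio_feat)
    pv = classify_video(video_feat)
    fused = (pa ** audio_weight) * (pv ** (1.0 - audio_weight))   # weighted geometric mean
    return fused / fused.sum()
```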

Journal ArticleDOI
TL;DR: In this article, the use of genetic algorithms (GAs) for size optimization of trusses is demonstrated, and the concept of rebirthing is shown to be considerably effective for problems involving continuous design variables.
Abstract: This paper demonstrates the use of genetic algorithms (GAs) for size optimization of trusses. The concept of rebirthing is shown to be considerably effective for problems involving continuous design variables. Some benchmark examples are studied involving 4‐bar, 10‐bar, 64‐bar, 200‐bar and 940‐bar two‐dimensional trusses. Both continuous and discrete variables are considered.

Proceedings Article
Henry Kautz, Joachim Paul Walser
18 Jul 1999
TL;DR: ILP-PLAN can find better quality solutions for a set of hard benchmark logistics planning problems than had been found by any earlier system.
Abstract: This paper describes ILP-PLAN, a framework for solving AI planning problems represented as integer linear programs. ILP-PLAN extends the planning as satisfiability framework to handle plans with resources, action costs, and complex objective functions. We show that challenging planning problems can be effectively solved using both traditional branch-and-bound IP solvers and efficient new integer local search algorithms. ILP-PLAN can find better quality solutions for a set of hard benchmark logistics planning problems than had been found by any earlier system.
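
The flavor of such an encoding: binary variables for "action a occurs at step t" and "fluent p holds at step t", linear constraints for preconditions, effects and resources, and a linear objective over action costs. This is a generic state-change-style sketch with symbols chosen here, not ILP-PLAN's exact encoding.

```latex
\begin{aligned}
\min\ & \textstyle\sum_{t=1}^{T}\sum_{a\in A} c_a\, x_{a,t}
  && x_{a,t}\in\{0,1\}\ \text{(action $a$ at step $t$)},\quad y_{p,t}\in\{0,1\}\ \text{(fluent $p$ holds at $t$)}\\
\text{s.t. } & x_{a,t} \le y_{p,t-1} && \forall\, p \in \mathrm{pre}(a) \quad \text{(preconditions must hold)}\\
 & y_{p,t} \le y_{p,t-1} + \textstyle\sum_{a:\, p \in \mathrm{add}(a)} x_{a,t} && \text{(a fluent only becomes true via some adding action)}\\
 & \textstyle\sum_{a\in A} r_a\, x_{a,t} \le R_t && \text{(linear resource limits per step)}.
\end{aligned}
```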

Proceedings ArticleDOI
22 Aug 1999
TL;DR: In this article, an integrated guidance and control (G&C) system is designed via an approximate solution to the nonlinear disturbance attenuation problem, and the integrated controller has been implemented in a high fidelity six-degree-of-freedom (6DOF) missile simulation that incorporates a fully coupled nonlinear aerodynamics model.
Abstract: The need to engage tactical ballistic missile (TBM) threats and high performance anti-ship cruise missiles is dictating the design of enhanced performance missile interceptors that can provide a high probability of kill. It can be argued that current interceptor guidance and control (G&C) designs are suboptimal because each of the G&C components is designed separately before they are made to interact together as a single functional unit. Ultimately, integrated G&C (IGC) design techniques might improve interceptor performance because: 1) the implicit interdependency of the classically separate G&C components could provide a positive synergism that is unrealized in the more conventional designs, and 2) an IGC design is formulated as a single optimization problem thus providing a unified approach to interceptor performance optimization. The prototype IGC system discussed in this paper is designed via an approximate solution to the nonlinear disturbance attenuation problem. Furthermore, the integrated controller has been implemented in a high fidelity six-degree-of-freedom (6DOF) missile simulation that incorporates a fully coupled nonlinear aerodynamics model. A high-performance benchmark missile G&C system has also been designed and incorporated to provide performance comparisons. In addition to a discussion of the solution methodology, 6DOF Monte Carlo simulation results are presented that compare the integrated concept to the benchmark G&C system. The simulation results to date show the IGC paradigm reduces both the mean and 1-sigma miss statistics as compared to the benchmark system.

Posted Content
TL;DR: In this paper, the authors propose new benchmark scenarios for the Higgs-boson search at LEP2: keeping m_t and M_SUSY fixed, they improve on the definition of the maximal mixing benchmark scenario, defining precisely the values of all MSSM parameters such that the new m_h^max benchmark scenario yields the parameters which maximize the value of m_h for a given tan(beta).
Abstract: We suggest new benchmark scenarios for the Higgs-boson search at LEP2. Keeping m_t and M_SUSY fixed, we improve on the definition of the maximal mixing benchmark scenario, defining precisely the values of all MSSM parameters such that the new m_h^max benchmark scenario yields the parameters which maximize the value of m_h for a given tan(beta). The corresponding scenario with vanishing mixing in the scalar top sector is also considered. We propose a further benchmark scenario with a relatively large value of |mu|, a moderate value of M_SUSY, and moderate mixing parameters in the scalar top sector. While the latter scenario yields m_h values that in principle allow the complete M_A-tan(beta) plane to be accessed at LEP2, it also contains parameter regions where Higgs-boson detection can be difficult, because of a suppression of the branching ratio of its decay into bottom quarks.

Journal ArticleDOI
TL;DR: The thesis of this paper is that the business model plays the primary role in the development of an e-commerce benchmark: it is the business model that determines the processes and transactions, and thus also the database and navigational designs.