Author

Toni Farley

Bio: Toni Farley is an academic researcher from Arizona State University. The author has contributed to research in topics: Job scheduler & Scheduling (computing). The author has an h-index of 8 and has co-authored 23 publications receiving 406 citations. Previous affiliations of Toni Farley include the Translational Genomics Research Institute.

Papers
Journal Article
TL;DR: The metrics presented in this paper, based on human factors and on technology attributes such as time-dependence and symmetry, can provide the criteria necessary for QoS assurance for network applications.
Abstract: In this paper, we present quality of service (QoS) metrics for various network applications based on human factors and technology attributes. The first term, human factors, addresses human perception of different kinds of media, such as conventional text, audio and video. The second term, technology attributes, represents the different technological aspects of these network applications, such as time-dependence and symmetry. Both of these terms are key factors that lead to variations of requirements for QoS. Establishing these requirements is paramount to providing QoS on computer networks and the Internet. With the metrics presented in this paper we can provide the criteria necessary for such QoS assurance.
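
To make the idea concrete, here is a minimal Python sketch of how metrics of this kind might drive QoS targets. The attribute names, media categories, and threshold values are illustrative assumptions, not the metrics defined in the paper.

```python
# Illustrative sketch only: attributes and thresholds are assumed, not the
# paper's actual metrics.
from dataclasses import dataclass

@dataclass
class AppProfile:
    name: str
    medium: str            # human factor: "text", "audio", or "video"
    time_dependent: bool   # technology attribute: is playback timing critical?
    symmetric: bool        # technology attribute: similar traffic both ways?

def qos_requirements(app: AppProfile) -> dict:
    """Derive rough QoS targets from an application's attributes."""
    # Human perception tolerates far more delay and jitter for text than
    # for continuous media such as audio and video.
    if app.medium == "text":
        return {"max_delay_ms": 5000, "max_jitter_ms": None}
    targets = {"max_delay_ms": 400, "max_jitter_ms": 50}
    if app.time_dependent:
        targets["max_jitter_ms"] = 30   # continuous playback needs tight jitter
    if app.symmetric:
        targets["max_delay_ms"] = 150   # two-way interactive traffic (e.g., VoIP)
    return targets

print(qos_requirements(AppProfile("VoIP call", "audio", True, True)))
# -> {'max_delay_ms': 150, 'max_jitter_ms': 30}
```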

216 citations

Journal Article
TL;DR: A consistent pattern is discovered in the plot of waiting time variance (WTV) against mean waiting time over all possible sequences of a set of jobs; this pattern can be used to evaluate the sacrifice in mean waiting time incurred while pursuing WTV minimization.
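
The quantities behind that plot are easy to reproduce in a short Python sketch: enumerate every sequence of a small job set and compute the (mean waiting time, WTV) pair for each. Defining a job's waiting time as the total processing time of the jobs before it is an assumption made for this sketch.

```python
# Enumerate all sequences of a small job set and compute, for each, the mean
# and variance (WTV) of the jobs' waiting times.
from itertools import permutations
from statistics import mean, pvariance

def waiting_times(seq):
    """Waiting time of each job = total processing time of the jobs before it."""
    waits, elapsed = [], 0
    for p in seq:
        waits.append(elapsed)
        elapsed += p
    return waits

jobs = (2, 5, 1, 7)  # hypothetical processing times
points = []
for seq in set(permutations(jobs)):
    w = waiting_times(seq)
    points.append((mean(w), pvariance(w)))  # one (mean, WTV) point per sequence

# Listing the pairs in order of mean waiting time hints at the trade-off
# between low mean and low variance that the paper's plot exposes.
for m, v in sorted(points):
    print(f"mean={m:.2f}  WTV={v:.2f}")
```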

47 citations

Journal Article
Nong Ye, Esma S. Gel, Xueping Li, Toni Farley, Ying-Cheng Lai
TL;DR: This paper presents web server QoS models that use a single queue, along with scheduling rules from production planning in the manufacturing domain, to differentiate QoS for classes of web service requests with different priorities.
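
As a rough sketch of the general idea (not the paper's exact models), the snippet below orders a single queue with the weighted shortest processing time (WSPT) rule, a classic production-planning rule; the class names, weights, and service times are invented for illustration.

```python
# Single-queue priority differentiation via WSPT: serve requests in decreasing
# weight/service-time ratio. All names and numbers here are made up.
import heapq

class SingleQueueServer:
    def __init__(self):
        self._heap = []
        self._seq = 0  # insertion counter used as a tie-breaker

    def submit(self, request_id, service_time, weight):
        priority = -weight / service_time  # heapq pops smallest first
        heapq.heappush(self._heap, (priority, self._seq, request_id))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = SingleQueueServer()
q.submit("premium-short", service_time=1.0, weight=10)
q.submit("basic-long", service_time=8.0, weight=1)
q.submit("premium-long", service_time=8.0, weight=10)
print(q.next_request())  # -> 'premium-short' (high weight, short service time)
```

A higher-priority class gets a larger weight, so its requests jump ahead of lower classes unless their service times are disproportionately long.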

39 citations

Journal Article
TL;DR: This paper presents the System-Fault-Risk (SFR) framework for cyber attack classification, which is based on a scientific foundation, combining theories from system engineering, fault modeling, and risk assessment.
Abstract: Computer and network systems fall victim to many cyber attacks of different forms. To reduce the risks of cyber attacks, an organization needs to understand and assess them, make decisions about what types of barriers or protection mechanisms are necessary to defend against them, and decide where to place such mechanisms. Understanding cyber attack characteristics (threats, attack activities, state and performance impact, etc.) helps in choosing effective barriers. Understanding the assets affected by cyber attacks helps in deciding where to place such barriers. To develop these understandings, we classify attacks in a comprehensive, sensible format. This paper presents the System-Fault-Risk (SFR) framework for cyber attack classification, which we base on a scientific foundation, combining theories from system engineering, fault modeling, and risk assessment. Our work extends existing classifications with a focus on separating cause and effect, and on further refining effects to include state and performance.
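
The cause/effect separation the framework emphasizes can be pictured with a small record type. The field names and example values below are assumptions made for demonstration, not the SFR taxonomy itself.

```python
# Hypothetical attack record separating cause from effect, with effects
# refined into state impact and performance impact.
from dataclasses import dataclass

@dataclass
class AttackRecord:
    # cause side
    threat_source: str        # e.g., external actor vs. insider
    attack_activity: str      # e.g., flooding, code injection
    affected_asset: str       # suggests where a protective barrier belongs
    # effect side, refined into state and performance
    state_effect: str         # e.g., corrupted file, open backdoor
    performance_effect: str   # e.g., degraded response time, denial of service

syn_flood = AttackRecord(
    threat_source="external actor",
    attack_activity="TCP SYN flooding",
    affected_asset="web server network stack",
    state_effect="exhausted half-open connection table",
    performance_effect="denial of service to legitimate clients",
)
print(syn_flood)
```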

21 citations

Patent
10 May 2004
TL;DR: In this patent, a batch of jobs to be processed is arranged by a routine called the “Yelf Spiral,” in which the list of jobs is begun by placing the smallest job at the center and each succeeding larger job (the next smallest) is placed alternately to the left and right of the smallest job until the largest job has been placed on the list.
Abstract: Job scheduling techniques to reduce the variance of waiting time, for stable performance, deliver jobs in batches from requesting entities (such as PCs connected by a network) to a resource such as a server. Each batch contains N or fewer jobs. A first, waiting buffer can receive the requests, and a second, processing buffer receives each batch from the waiting buffer. A batch of jobs to be processed is arranged by a routine called the “Yelf Spiral,” in which the list of jobs is begun by placing the smallest job at the center of the list, and each succeeding larger job (the next smallest) is placed alternately to the left and right of the smallest job until the largest job has been placed on the list. The jobs are then performed in the order that places the largest job last.
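
The spiral arrangement described above translates almost directly into code. The sketch below follows only the abstract's wording; tie handling and the final orientation step are assumptions.

```python
# Arrange a batch: smallest job at the center, each next-larger job placed
# alternately left and right, then orient the sequence so the largest job
# runs last. Assumes a non-empty batch of distinct processing times.
from collections import deque

def spiral_order(processing_times):
    ordered = sorted(processing_times)
    spiral = deque([ordered[0]])       # smallest job seeds the center
    go_left = True
    for p in ordered[1:]:
        if go_left:
            spiral.appendleft(p)       # alternate: left, right, left, ...
        else:
            spiral.append(p)
        go_left = not go_left
    seq = list(spiral)
    if seq[0] == ordered[-1]:          # largest landed on the left end:
        seq.reverse()                  # flip so it is processed last
    return seq

print(spiral_order([4, 1, 7, 2, 9]))   # -> [7, 2, 1, 4, 9]; largest job last
```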

19 citations


Cited by
Journal Article
TL;DR: It is suggested that different social science methodologies, such as psychology, cognitive science, and human behavior, might implement DMT as an alternative to the methodologies already on offer; the direction of future developments in DMT methodologies and applications is also discussed.
Abstract: In order to determine how data mining techniques (DMT) and their applications have developed during the past decade, this paper reviews data mining techniques and their applications and development through a survey of literature and the classification of articles from 2000 to 2011. Keyword indices and article abstracts were used to identify 216 articles concerning DMT applications in 159 academic journals (retrieved from five online databases). This paper surveys and classifies DMT with respect to the following three areas: knowledge types, analysis types, and architecture types, together with their applications in different research and practical domains. A discussion deals with the direction of future developments in DMT methodologies and applications: (1) DMT is finding increasing applications in expertise orientation, and the development of applications for DMT is a problem-oriented domain. (2) It is suggested that different social science methodologies, such as psychology, cognitive science, and human behavior, might implement DMT as an alternative to the methodologies already on offer. (3) The ability to continually change and acquire new understanding is a driving force for the application of DMT, and this will allow many new future applications.

563 citations

Journal Article
TL;DR: Several typical covering models and their extensions, ordered from simple to complex, are introduced, including the Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models.
Abstract: With emergencies being, unfortunately, part of our lives, it is crucial to efficiently plan and allocate emergency response facilities that deliver effective and timely relief to people most in need. Emergency Medical Services (EMS) allocation problems deal with locating EMS facilities among potential sites to provide efficient and effective services over a wide area with spatially distributed demands. It is often problematic due to the intrinsic complexity of these problems. This paper reviews covering models and optimization techniques for emergency response facility location and planning in the literature from the past few decades, while emphasizing recent developments. We introduce several typical covering models and their extensions ordered from simple to complex, including Location Set Covering Problem (LSCP), Maximal Covering Location Problem (MCLP), Double Standard Model (DSM), Maximum Expected Covering Location Problem (MEXCLP), and Maximum Availability Location Problem (MALP) models. In addition, recent developments on hypercube queuing models, dynamic allocation models, gradual covering models, and cooperative covering models are also presented in this paper. The corresponding optimization techniques to solve these models, including heuristic algorithms, simulation, and exact methods, are summarized.
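
As a toy illustration of the simplest model mentioned, the sketch below covers a tiny Location Set Covering Problem instance with a greedy heuristic, which stands in for the exact and heuristic methods the survey actually reviews; the station names and coverage sets are made up.

```python
# Greedy heuristic for a toy LSCP: cover every demand point with as few
# facility sites as possible. Real LSCP studies use exact integer programming;
# greedy is shown here only to make the model concrete.
def greedy_lscp(coverage):
    """coverage: dict mapping site -> set of demand points it reaches in time."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Pick the site that covers the most still-uncovered demand points.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some demand points cannot be covered")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

sites = {
    "station_A": {"d1", "d2", "d3"},
    "station_B": {"d3", "d4"},
    "station_C": {"d4", "d5", "d6"},
}
print(greedy_lscp(sites))  # -> ['station_A', 'station_C']
```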

356 citations

Proceedings Article
01 Jan 2000
TL;DR: This work proposes and evaluates several algorithms for supporting advanced reservation of resources in supercomputing scheduling systems, and finds that the wait times of applications submitted to the queue increase when reservations are supported, with the size of the increase depending on how reservations are supported.
Abstract: Some computational grid applications have very large resource requirements and need simultaneous access to resources from more than one parallel computer. Current scheduling systems do not provide mechanisms to gain such simultaneous access without the help of human administrators of the computer systems. In this work, we propose and evaluate several algorithms for supporting advanced reservation of resources in supercomputing scheduling systems. These advanced reservations allow users to request resources from scheduling systems at specific times. We find that the wait times of applications submitted to the queue increase when reservations are supported and that the increase depends on how reservations are supported. Further, we find that the best performance is achieved when we assume that applications can be terminated and restarted, backfilling is performed, and relatively accurate run-time predictions are used.
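
The interaction measured here can be sketched with a simple backfilling check: a queued job may run on idle processors only if its predicted runtime cannot collide with an upcoming reservation's processor needs. The function name and the flat processor-count capacity model are assumptions for illustration.

```python
# Can this queued job be backfilled right now without delaying any advance
# reservation? Times are minutes; capacity is a flat processor count.
def can_backfill(job_procs, job_runtime, now, free_procs, reservations):
    """reservations: list of (start_time, reserved_procs) tuples."""
    if job_procs > free_procs:
        return False
    end = now + job_runtime
    for start, reserved in reservations:
        # If the job would still be running when the reservation starts and
        # would leave it too few processors, backfilling must wait.
        if end > start and free_procs - job_procs < reserved:
            return False
    return True

# 8 processors free, a 6-processor reservation starting at t=45:
print(can_backfill(job_procs=4, job_runtime=30, now=0,
                   free_procs=8, reservations=[(45, 6)]))  # -> True
print(can_backfill(job_procs=4, job_runtime=60, now=0,
                   free_procs=8, reservations=[(45, 6)]))  # -> False
```

This conservative check mirrors why supported reservations raise queue wait times: a job that cannot finish before a reservation must sit idle even when processors are free, unless (as the paper finds works best) it can be terminated and restarted.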

303 citations

01 Jan 1977

286 citations