
Showing papers on "Applied science" published in 2012



Book
02 Nov 2012
TL;DR: While dealing with theorems and algorithms, emphasis is laid on constructions consisting of formal proofs together with worked examples that apply the various algorithms.
Abstract: The book has many important features which make it suitable for both undergraduate and postgraduate students in various branches of engineering and the general and applied sciences. Important topics interrelating Mathematics and Computer Science are also covered briefly. The book is useful to readers with a wide range of backgrounds, including Mathematics, Computer Science/Computer Applications, and Operational Research. While dealing with theorems and algorithms, emphasis is laid on constructions consisting of formal proofs together with worked example applications. Until now, there has been a scarcity of books in the open literature that cover all of this material, most importantly the various algorithms together with example applications.

20 citations


Journal ArticleDOI
Vinton G. Cerf
TL;DR: The research question of this thesis is whether it is possible to collect data from the build process of a large-scale software project in order to understand, predict, and prevent problems in the quality and productivity of the actual system.
Abstract: Managing large software projects is intrinsically difficult. Although high software quality is a definite must, other issues like time and cost play major roles in large software development. For example, if a software company can produce the highest quality products but cannot predict how long development will take and how much it will cost, then that company will not have any business. Software metrics are one answer to those problems. Software metrics are the measurement of periodic progress towards a goal [3]. Metrics are used to indicate various problems in a development process. Currently, there are a large number of documented metrics. However, there does not exist one perfect formula that satisfies every development's quality and productivity needs. The key to a good software metrics program is to identify the specific goals of the development and to assist in reaching those goals. I will address this concept through the development of specific measurements and analyses that will improve the quality of a specific system, the Mission Data System (MDS) at the Jet Propulsion Laboratory. I will attempt to identify certain software metrics that can help JPL reach their development goals. To accomplish this I have created the Hackystat Jet Propulsion Laboratory Build System (hackyJPLBuild). This system measures and analyzes the build system of MDS. The research question of this thesis is whether it is possible to collect data from the build process of a large-scale software project in order to understand, predict, and prevent problems in the quality and productivity of the actual system. To evaluate this research question I will conduct three case studies: (1) can the hackyJPLBuild system accurately represent the build process of MDS; (2) can threshold values indicate problematic issues in MDS; and (3) can hackyJPLBuild predict future problematic issues in MDS? Initial results of case study 1 indicate that hackyJPLBuild can accurately represent the build process of MDS. In fact, hackyJPLBuild has already identified some potential flaws in the MDS build process. Case studies 2 and 3 have not been conducted yet.
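The abstract describes threshold-based analysis of build data without giving code. As a minimal sketch only, assuming hypothetical metric names and threshold values (the abstract does not specify hackyJPLBuild's actual schema), the case-study-2 idea of flagging problematic builds might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class BuildRecord:
    """One build of the system (hypothetical fields, for illustration only)."""
    build_id: int
    duration_min: float   # wall-clock build time in minutes
    failed_targets: int   # targets that did not build successfully
    total_targets: int

def failure_rate(b: BuildRecord) -> float:
    return b.failed_targets / b.total_targets

def flag_problem_builds(history, max_failure_rate=0.05, max_duration_min=120):
    """Return builds whose metrics cross the (assumed) threshold values,
    mirroring case study 2: thresholds as indicators of problematic issues."""
    return [b for b in history
            if failure_rate(b) > max_failure_rate or b.duration_min > max_duration_min]

# Example: two healthy builds and one that trips both thresholds.
history = [
    BuildRecord(1, 95.0, 2, 200),
    BuildRecord(2, 101.5, 4, 200),
    BuildRecord(3, 180.0, 30, 200),
]
print([b.build_id for b in flag_problem_builds(history)])  # -> [3]
```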

20 citations


BookDOI
01 Jan 2012
TL;DR: The paper presents an open-source solution for QoS monitoring of VoIP traffic that achieves high performance at significantly lower cost by exploiting the performance capabilities of a field-programmable gate array (FPGA).
Abstract: A key issue in the VoIP services market is the availability of tools that permit constant monitoring of the relevant Quality of Service (QoS) parameters. Several commercial and open-source solutions are available, based on dedicated hardware and/or open-source software. These solutions aim to achieve a tradeoff between performance and instrumentation cost. In general, high-performance and precise monitoring tools are based on dedicated hardware, which is expensive. In contrast, cheaper software-based solutions working on top of Commercially available Off-The-Shelf (COTS) hardware are performance-limited, especially when serving high-capacity links. In this context, the paper presents an open-source solution for QoS monitoring of VoIP traffic that achieves high performance at significantly lower cost. The proposed solution exploits the performance capabilities achievable with a field-programmable gate array (FPGA). The associated cost reduction arises from the high flexibility of an FPGA. Our experimental analysis explores the accuracy of the developed prototype, measuring relevant QoS parameters of VoIP traffic on high-capacity links.
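The abstract does not name the specific QoS parameters the prototype measures. One standard VoIP QoS parameter is RTP interarrival jitter; a software sketch of the RFC 3550 jitter estimator (the paper's FPGA pipeline itself is not described here, so this is a reference illustration only) could be:

```python
def rfc3550_jitter(arrivals, timestamps):
    """Interarrival jitter estimator from RFC 3550, a common VoIP QoS metric.
    arrivals[i]   -- receive time of packet i (seconds)
    timestamps[i] -- RTP timestamp of packet i, converted to seconds
    """
    j = 0.0
    for i in range(1, len(arrivals)):
        # Difference in relative transit time between consecutive packets.
        d = (arrivals[i] - arrivals[i - 1]) - (timestamps[i] - timestamps[i - 1])
        j += (abs(d) - j) / 16.0   # exponential smoothing with gain 1/16
    return j

# Packets sent every 20 ms, with a 5 ms delay spike on the third packet.
ts = [i * 0.020 for i in range(5)]
rx = [0.000, 0.020, 0.045, 0.060, 0.080]
print(round(rfc3550_jitter(rx, ts), 4))  # -> 0.0006 (seconds)
```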

15 citations


Journal ArticleDOI

12 citations


MonographDOI
01 Dec 2012
TL;DR: The editors position this volume as essential for academic and research library reference collections and as a crucial tool for academicians, researchers, and practitioners that is also ideal for classroom use.
Abstract: Market: This premier publication is essential for all academic and research library reference collections. It is a crucial tool for academicians, researchers, and practitioners and is ideal for classroom use. Khairiyah Mohd Yusof (Universiti Teknologi Malaysia, Malaysia), Naziha Ahmad Azli (Universiti Teknologi Malaysia, Malaysia), Azlina Mohd Kosnin (Universiti Teknologi Malaysia, Malaysia), Sharifah Kamilah Syed Yusof (Universiti Teknologi Malaysia, Malaysia), & Yudariah Mohammad Yusof (Universiti Teknologi Malaysia, Malaysia)

12 citations


Journal ArticleDOI
Deepak Kumar
TL;DR: Despite repeated warnings from mass media in the last few years about the data deluge, somehow I have caught myself sitting at the edge of yet another boundary: the idea of the increasing centrality of data or information.
Abstract: Reflections: Data science overtakes computer science? Quick, read the question below, then close your eyes and try to answer it. What is an exabyte? An exabyte, or EB, is a billion gigabytes, a million terabytes, or the amount of memory you can access with a 64-bit address (16 EB to be more precise). You can buy a 1-terabyte hard disk today for less than $100. A million terabytes of storage is still a sizable investment. Ever wonder what you would do with all the data if you managed to fill it all up? As humanity we are now generating exabytes of data on a daily basis. A 2010 article in the Economist magazine estimated that in 2005 we created a total of 150 exabytes of information [1]. Estimates of 1200 exabytes were thrown about for 2010. It must be good to be in the data storage industry. One of the grand challenges of our time has to be the management, storage, and handling of large amounts of data (not to mention the amount of energy this requires). While there are definite advantages to having so much data available, it is also becoming increasingly difficult to process and exploit so much data. The Economist article calls it "plucking the diamond from the waste". Luciano Floridi, a self-proclaimed philosopher of information (University of Hertfordshire, UK), predicts that soon we will be drowning in the age of the zettabyte data deluge [2]. One of the things that excites me the most about computer science is how it is constantly pushing the boundaries. The stuff at the edge: artificial intelligence in the 1980s, for example. Glancing over at my bookshelf recently I noticed that my AI books have gradually receded into the far bookcase of my office. They used to be right behind me so I would have ready access. Now, migration at this scale may not mean anything, but you'd have to see the books I found myself staring at in that prime spot. I now possess a couple of shelves of books on data, visualization, statistical analysis, and related material. Despite repeated warnings from mass media in the last few years about the data deluge, somehow I have caught myself sitting at the edge of yet another boundary: the idea of the increasing centrality of data or information. For the past two years I have been involved with the Center …
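The unit arithmetic in the opening paragraph is easy to verify. The following lines reproduce the "16 EB" figure, which strictly speaking counts binary exbibytes against the 2^64-byte address space:

```python
# Address space of a 64-bit pointer, expressed in exabytes.
bytes_64bit = 2 ** 64
EB  = 10 ** 18            # decimal exabyte
EiB = 2 ** 60             # binary exbibyte

print(bytes_64bit / EB)      # ~18.45 decimal exabytes
print(bytes_64bit // EiB)    # exactly 16 exbibytes -- the "16 EB" in the text
print(EB // 10 ** 12)        # 1,000,000: a million terabytes per (decimal) exabyte
```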

6 citations



Journal ArticleDOI
01 Jan 2012
TL;DR: The meaning of the safety science and engineering discipline is introduced: the basic knowledge system of safety science and engineering, suited to the level of students' physical and mental development and to educational requirements, and composed in accordance with teaching theories for the training of engineering bachelors, masters, and doctors.
Abstract: Through a discussion of the basic meaning and construction of the "discipline" concept, the meaning of the safety science and engineering discipline is introduced, i.e., the basic knowledge system of safety science and engineering suited to the level of students' physical and mental development and to educational requirements, composed in accordance with teaching theories for the training of engineering bachelors, masters, and doctors by selecting and organizing the contents of safety science and safety engineering, in order to meet the teaching needs for senior talents in the field of safety science and safety engineering. The discipline of safety science and engineering must be positioned as a knowledge system that focuses on the logical laws of the safety science system and reveals the essential and practical laws of safety phenomena. This knowledge system shall include at least two parts: a basic safety science knowledge system (including safety principles, risk principles, safety jurisprudence, safety economics, and other knowledge) and a safety engineering science knowledge system (including risk management and measurement, damage control, safety engineering, etc.).

5 citations

Proceedings ArticleDOI
03 Oct 2012
TL;DR: The paper advances the suggestion that one distinction between the activities of science and engineering is the role of abstract thinking; this view may help to clarify the difference between engineering and science in a way that is less prone to value judgments based on supposed differences between pure and applied activity.
Abstract: A goal of the philosophy of engineering and engineering education is to more clearly distinguish engineering and engineering education from science and science education. This paper advances the suggestion that one distinction between the activities of science and engineering is the role of abstract thinking. Engineering and science both engage in abstract thinking, but the direction of the abstraction process points in different directions for the two disciplines. The view advanced in this paper may help to clarify the difference between engineering and science in a way that is less prone to value judgments based on supposed differences between pure and applied activity. Science proceeds towards abstract theory; engineering proceeds from the abstract idea of function.

01 Jan 2012
Abstract: The discipline of materials science and engineering has expanded rapidly in response to growing demand for materials that make improved use of existing resources or are needed for new technologies. The program at Northwestern is broad based, offering educational and research opportunities in polymer science, ceramics, metallurgy, surface science, biomaterials, nanomaterials, and electronic materials. Engineers, scientists, and technologists who work on these different materials all basically apply the same scientific principles governing the interrelation of processing, structure, properties, and material performance. A key theme of the Northwestern program is the integration of these principles in the systematic design of new materials.

Journal ArticleDOI
TL;DR: The author pursues the gender-balance differences between the fields of CS and IS, especially given that CS has some acknowledged major problems attracting women into its field, and finds studies that seem to confuse what IS is.
Abstract: INTRODUCTION In a recent column on the gender balance in the Information Systems (IS) and Computer Science (CS) fields, I mentioned what I called “a question for another time.” That question, you may or (probably) may not recall, was “How many people writing articles for the computing/information technology literature really understand what the field of IS is?” The reason I raised the question was that I had been pursuing the gender balance differences between the fields of CS and IS, especially where CS has some acknowledged major problems attracting females into its field, and I kept running across studies that seemed to confuse what IS is. For example, one study titled “Why Don’t More Women Major in Information Systems?” turned out, in spite of its title, to be about the computing field in general, not IS in particular (you could tell quite easily because of the way the article defined IS as computer science, computer engineering, and electrical engineering!). Now that really bugs me! I realize that I may be preaching to the choir here, because all of you readers of Information Systems Management are savvy enough to know what IS really is (after all, you’re professionally immersed in it!). Quite clearly, however, something is wrong in the field of IS when our collegial brethren don’t know what the heck we’re about. I might have let it all drift by and not bothered to revisit my question, except that something else happened. That something else was the announcement of something called “CSEdWeek,” a celebration of the teaching of CS, which came to pass the week containing December 9, 2011. In the announcement for the event, there came a need to define CS. And here’s what those folks said CS included:





Dissertation
01 Jan 2012
TL;DR: Function shipping in a scalable parallel programming model is presented, describing how functions are shipped between parallel computers and how the model changed over time to accommodate increasingly diverse programming models.
Abstract: Function Shipping in a Scalable Parallel Programming Model


Dissertation
01 Jan 2012
Abstract: Modeling Systems from Measurements of their Frequency Response

01 Jan 2012
TL;DR: Vergara Arteaga proposed a new composite variable model (CVM) to address the strategic TL relay network design (TLRND) problem, capturing operational considerations implicitly within the variable definition instead of adding them as constraints to the model.
Abstract: Driver turnover is a significant problem for full truckload (TL) carriers that operate using point-to-point (PtP) dispatching. The low quality of life of drivers, due to the long periods of time they spend away from home, is usually identified as one of the main reasons for the high turnover. In contrast, driver turnover is not as significant for less-than-truckload (LTL) carriers, which use hub-and-spoke transportation networks that allow drivers to return home more frequently. Based on the differences between TL and LTL, the use of a relay network (RN) has been proposed as an alternative dispatching method for TL transportation in order to improve driver retention. In a RN, a truckload visits one or more relay points (RPs) where drivers and trailers are exchanged while the truckload continues its movement to the final destination. In this research, we propose a new composite variable model (CVM) to address the strategic TL relay network design (TLRND) problem. With this approach, we capture operational considerations implicitly within the variable definition instead of adding them as constraints in our model. Our composites represent feasible routes for the truckloads through the RN that satisfy limitations on circuity, the number of RPs visited, and the distances between RPs and between a RP and origin-destination nodes. Given a strict limitation on the number of RPs allowed to be visited, we developed a methodology to generate feasible routes using predefined templates. This methodology was preferred over an exact feasible-path enumeration algorithm that was also developed to generate valid routes. The proposed approach was successfully used to obtain high-quality solutions to large problem instances of TLRND. Furthermore, extending the original CVM formulation, we incorporate mixed-fleet dispatching decisions into the design of the RN. This alternative system allows routing some truckloads through the RN while the remaining truckloads are dispatched PtP. We analyze the performance of our models and the solutions obtained for TLRND problems through extensive computational testing. Finally, we conclude with a description of directions for future research.
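The abstract lists the route-feasibility conditions (a circuity limit, a cap on RPs visited, and leg-distance bounds) without formalizing them. As a minimal sketch, with illustrative numeric limits that are not the dissertation's actual parameters, a feasibility check for one candidate route could be:

```python
import math

def route_feasible(route, coords, max_circuity=1.25, max_relay_points=3,
                   min_leg=50.0, max_leg=450.0):
    """Check one candidate truckload route through a relay network.

    route  -- node ids [origin, RP, ..., RP, destination]
    coords -- node id -> (x, y) location
    The constraint set mirrors the abstract: a circuity limit, a cap on the
    number of RPs visited, and bounds on the distances between consecutive
    stops. All numeric values here are assumptions for illustration.
    """
    legs = [math.dist(coords[u], coords[v]) for u, v in zip(route, route[1:])]
    direct = math.dist(coords[route[0]], coords[route[-1]])
    circuity = sum(legs) / direct          # routed distance vs. direct distance
    n_rps = len(route) - 2                 # intermediate relay points
    return (circuity <= max_circuity
            and n_rps <= max_relay_points
            and all(min_leg <= d <= max_leg for d in legs))

coords = {"O": (0, 0), "R1": (200, 30), "R2": (420, 40), "D": (600, 0)}
print(route_feasible(["O", "R1", "R2", "D"], coords))  # -> True (circuity ~1.01)
```

In a composite variable model, each route that passes such a check would become one "composite" column, so these operational rules never appear as explicit constraints in the optimization model.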

Journal ArticleDOI
TL;DR: When AI EDAM began, Rittel and Webber (1984) had recently described design problems as "wicked" (unique, unrepresentable, and boundless in scale), but the past 25 years have seen changes to our ideas of computation that may yet help to tackle them.
Abstract: It has often been said of artificial intelligence (AI) that once an essential aspect of intelligence has been emulated, it ceases to remain AI. Logical reasoning, chess playing, and to some extent even natural language processing have over the years become simple stock algorithms, which are seemingly well understood. Our human ability for design seems resilient to this fate, and yet the difficulty of the tasks we face in engineering indicates the clear need for intelligent computation as an essential aid. The importance of a journal like AI EDAM should be evident in the range of factors that make design, analysis, and manufacturing increasingly complex. The more globally connected world of the present means that the task of design is more distributed, often between teams in different continents. Manufacturing similarly occurs in geographically distant locations, with first data and then physical components transported internationally. The demand for better performance and better degrees of optimization to save limited resources of energy or materials demands nonstandard solutions. The size and complexity of our products, including whole cities or urban regions designed in their entirety and constructed almost overnight, is unprecedented. All of these factors stretch the bounds of normal human intuition and traditional methods, so the need for AI in engineering is today perhaps greater than ever before. When AI EDAM began, Rittel and Webber (1984) had recently described design problems as "wicked" (unique, unrepresentable, and boundless in scale), but the past 25 years have seen changes to our ideas of computation that may yet help us to tackle them. This speculation is admittedly optimistic, but by way of suggesting some direction for the future of our field, here are three observations from its past. As the first volume of AI EDAM was going to press, Rodney Brooks suggested (although this was not published until 1991) that conventional models of complex events were inherently limited, and that it is "better to use the world as its own model." Clear representation, in many circles, would no longer be a primary focus of AI. Google is presently celebrating its 13th birthday, putting its origin at about the midpoint of AI EDAM's history. Although it is obvious that our ability to search the Internet has changed the way we work, what is perhaps of even broader impact is the necessary development of new algorithms and techniques to deal with massive amounts of relatively unstructured data. Internet search developers are just not interested in anything less than data in the billions or trillions. This necessitates a change from simple representations and a change in how we make sense of information, as some meaning is extracted from all of the mess that exists out there. In the present, thanks to Moore's law (1965), computers are 5000 times faster than 25 years ago, but even more crucial are structural changes in computing architecture. Graphics rendering in hardware, due mainly to the video games industry, exceeds what Moore's law alone could provide for software rendering. The result is real-time rendering that allows for immediate feedback when dealing with the understanding of surface geometry, light, and so forth. This completely changes the nature of how the computer can be used in design by making results accessible to our intuition. What about fluid dynamics? What about pedestrian simulation and more complex human behavior via agent-based models?
Thanks also to the games industry, these should become more immediate and familiar in the next few years, changing the nature of design yet again. These events are significant because engineering has too often been considered to trade in clear, standardized representations, to deal with well-structured problems, and to favor method over intuition. Yet where the design task is difficult and real intelligence and creativity are needed, this is not the case. The above changes to representation, algorithms, and hardware are perhaps only now becoming ubiquitous enough to make their full impact on engineering practice. However, if this results in a better understanding of how we can deal with our more difficult problems, the coming years of AI EDAM should be a very exciting period.

Journal ArticleDOI
01 Dec 2012 - Ubiquity
TL;DR: Fifteen authors examine different aspects, from what science is, to natural information processes, to new science-enabled approaches in STEM education.
Abstract: The recent interest in encouraging more middle and high school students to prepare for careers in science, technology, engineering, or mathematics (STEM) has rekindled the old debate about whether computer science is really science. It matters today because computing is such a central field, impacting so many other fields, and yet it is often excluded from high school curricula because it is not seen as a science. In this symposium, fifteen authors examine different aspects, from what science is, to natural information processes, to new science-enabled approaches in STEM education.

01 Jan 2012
TL;DR: This dissertation presents the design and analysis of a real-time adaptive DVS architecture for paralleled Multi-Threshold NULL Convention Logic (MTNCL) systems and shows that energy-efficient systems with low area overhead can be created using this approach.
Abstract: Power has become a critical design parameter for digital CMOS integrated circuits. With performance still garnering much concern, a central idea has emerged: minimizing power consumption while maintaining performance. The use of dynamic voltage scaling (DVS) with parallelism has been shown to be an effective way of saving power while maintaining performance. However, the potency of DVS and parallelism in traditional, clocked synchronous systems is limited because of the strict timing requirements such systems must comply with. Delay-insensitive (DI) asynchronous systems have the potential to benefit more from these techniques due to their flexible timing requirements and high modularity. This dissertation presents the design and analysis of a real-time adaptive DVS architecture for paralleled Multi-Threshold NULL Convention Logic (MTNCL) systems. Results show that energy-efficient systems with low area overhead can be created using this approach.
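The dissertation's DVS controller is hardware for paralleled MTNCL pipelines; purely as a software analogy, with assumed voltage bounds and an assumed hysteresis band, one control step of a real-time adaptive DVS loop can be sketched as:

```python
def adaptive_dvs_step(vdd, throughput, demand,
                      v_min=0.6, v_max=1.2, step=0.05):
    """One control step of an adaptive DVS loop (software analogy only;
    the dissertation implements this for parallel MTNCL hardware).
    Lower Vdd to save power when there is slack; raise it when measured
    throughput falls below the demanded rate. All numbers are assumptions."""
    if throughput < demand:
        vdd = min(v_max, vdd + step)   # falling behind: speed up
    elif throughput > 1.1 * demand:    # >10% slack (assumed hysteresis band)
        vdd = max(v_min, vdd - step)   # ahead of demand: scale down, save power
    return vdd

vdd = 1.2
for measured, needed in [(120, 100), (115, 100), (90, 100)]:
    vdd = adaptive_dvs_step(vdd, measured, needed)
    print(round(vdd, 2))   # -> 1.15, 1.1, 1.15
```

Because delay-insensitive MTNCL circuits keep working correctly as the supply voltage (and hence speed) changes, such a loop can run continuously without the re-timing analysis a clocked design would require.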


01 Jan 2012
TL;DR: This paper presents a meta-analysis of the determinants of infectious disease outbreaks in eight operating theatres and shows clear patterns of infection that can be traced to neglected immune systems such as central nervous systems.
Abstract: 2012 International Transaction Journal of Engineering, Management, & Applied Sciences & Technologies.