Showing papers on "Applied science" published in 2017


Book ChapterDOI
01 Jan 2017
TL;DR: Data science has been behind resolving some of our most common daily tasks for several years: when you look for information on the web with a search engine or ask your mobile phone for directions, you are interacting with data science products.
Abstract: You have, no doubt, already experienced data science in several forms. When you are looking for information on the web by using a search engine or asking your mobile phone for directions, you are interacting with data science products. Data science has been behind resolving some of our most common daily tasks for several years.

21 citations


Journal ArticleDOI
TL;DR: Highlights how mathematics and computer science have significantly impacted the development of Data Science approaches and tools, and how those approaches pose new questions that can drive new research areas within these core disciplines, involving data analysis, machine learning, and visualization.
Abstract: As an emergent field of inquiry, Data Science serves both the information technology world and the applied sciences. Data Science is a known term that tends to be synonymous with the term Big-Data; however, Data Science is the application of solutions found through mathematical and computational research, while Big-Data Science describes problems concerning the analysis of data with respect to volume, variation, and velocity (3V). Even though little theory has been developed for Data Science from a scientific perspective, there is still great opportunity for tremendous growth. Data Science is proving to be of paramount importance to the IT industry due to the increased need for understanding the immense amount of data being produced and in need of analysis. In short, data is everywhere and in various formats. Scientists are currently using statistical and AI analysis techniques like machine learning methods to understand massive sets of data, and naturally, they attempt to find relationships among datasets. In the past 10 years, the development of software systems within the cloud computing paradigm, using tools like Hadoop and Apache Spark, has aided in making tremendous advances to Data Science as a discipline [Z. Sun, L. Sun and K. Strang, Big data analytics services for enhancing business intelligence, Journal of Computer Information Systems (2016), doi: 10.1080/08874417.2016.1220239]. These advances enabled both scientists and IT professionals to use cloud computing infrastructure to process petabytes of data on a daily basis. This is especially true for large private companies such as Walmart, Nvidia, and Google. This paper seeks to address pragmatic ways of looking at how Data Science — with respect to Big-Data Science — is practiced in the modern world. We also examine how mathematics and computer science help shape Big-Data Science’s terrain. We will highlight how mathematics and computer science have significantly impacted the development of Data Science approaches and tools, and how those approaches pose new questions that can drive new research areas within these core disciplines, involving data analysis, machine learning, and visualization.
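
As a hedged illustration of the kind of cloud-scale analysis the abstract alludes to, the following sketch uses Apache Spark through its Python API to aggregate a large tabular dataset. The file name, column names and cluster configuration are hypothetical and not taken from the paper.

# Minimal PySpark sketch, assuming a Spark installation and a hypothetical
# "events.csv" dataset with columns "user_id" and "bytes".
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("toy-aggregation").getOrCreate()

# Read a (potentially very large) CSV file, distributed across the cluster.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Aggregate per user; Spark parallelizes this over all partitions, which is
# what makes daily processing of very large datasets practical.
per_user = (events.groupBy("user_id")
                  .agg(F.count("*").alias("n_events"),
                       F.sum("bytes").alias("total_bytes")))

per_user.orderBy(F.desc("total_bytes")).show(10)
spark.stop()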

13 citations


Dissertation
01 Jan 2017
TL;DR: This thesis will develop several artificial experimental model active matter systems that are able to effectively mimic and reproduce some of the rich emergent non-equilibrium behavior exhibited by several active matter systems or processes, like chemotaxis, in order to uncover the underlying physical phenomena that govern this emergent behavior.
Abstract: Active matter systems have very recently received a great deal of interest due to their rich emergent non-equilibrium behavior. Some of the most vital and ubiquitous biological systems and processes are active matter systems, including reproduction, wound healing, dynamical adaptation, chemotaxis, and cell differentiation. Active matter systems span multiple length scales from meter to nanometer and can vary depending on the shape of the active agent, mode of motility, and environment. However, active matter systems are unified in that they are all composed of active units or particles that continuously convert ambient, stored, or chemical energy locally into motion and exhibit emergent non-equilibrium collective dynamical or phase behavior. Active matter systems have been studied extensively in the biological context, as well as in simulation and theory. However, there are relatively few artificial or synthetic experimental model soft active matter systems that can effectively mimic the rich emergent behavior exhibited by many active matter systems. Such model experimental systems are crucial not only to confirm the exotic behavior predicted by theory and simulation, but to study and investigate the underlying physical phenomena which may contribute to or even drive some emergent phenomena. These model systems are crucial to help determine what behavior is due to purely physical phenomena and what behavior requires some type of biological or biochemical stimuli. In this thesis, I will develop several artificial experimental model active matter systems that are able to effectively mimic and reproduce some of the rich emergent non-equilibrium behavior exhibited by several active matter systems or processes, like chemotaxis, in order to uncover the underlying physical phenomena that govern this emergent behavior. I will start by designing an extremely simple active matter system composed of a single active unit and then build up in complexity by adding many active components, changing the mode of motility, and including passive components which may or may not be fixed. I will show in this thesis that this emergent behavior is guided by fundamental physical phenomena like friction and the mechanical properties of the environment. The thesis divides this study into two parts. In Part I, I will develop an artificial soft active matter system that is able to effectively perform chemotaxis in a non-equilibrium manner by leveraging the concept of effective friction. The active components in this system will be magnetic particles that are coated with a biological ligand or receptor and placed on a substrate with the corresponding ligand or receptor. A rotating magnetic field will be applied and the magnetic particle will proceed to rotate with the applied field and convert some of that rotational energy into translational energy due to the effective friction induced by the breaking of reversible bonds between the surface of the particle and the substrate. I will then create gradients in the density of such binding sites; by placing the magnetic particle on a stochastic random walk, the differences in effective friction will lead to directed motion or drift reminiscent of chemotaxis. I will show that this concept of sensing based on effective friction induced by a binding interaction is general and scales with the affinity of the interaction being investigated.
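
A deliberately simplified toy model, not the experimental system developed in the thesis, can illustrate how a gradient in effective friction turns an unbiased random walk into directed drift reminiscent of chemotaxis. All parameters and the form of the density gradient below are invented for illustration.

# Toy 1-D walker: each step's length is scaled by the binding-site density
# sampled along the attempted step, standing in for the effective friction
# that converts rotation into translation. Steps toward higher density carry
# farther, so a density gradient produces a net drift.
import random

def binding_density(x):
    # Hypothetical linear gradient of surface binding sites, clipped to [0, 1].
    return min(max(0.1 + 0.01 * x, 0.0), 1.0)

def simulate(n_steps=10_000, step_scale=1.0, seed=0):
    random.seed(seed)
    x = 0.0
    for _ in range(n_steps):
        direction = random.choice((-1.0, 1.0))                 # unbiased direction
        coupling = binding_density(x + direction * step_scale / 2)
        x += direction * step_scale * coupling                 # friction-scaled step
    return x

# Averaged over many walkers, the final position drifts toward higher density.
walkers = [simulate(seed=s) for s in range(200)]
print(sum(walkers) / len(walkers))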

12 citations


Book ChapterDOI
01 Jan 2017
TL;DR: In this paper, the nature and roles of models in science, and in science education, are discussed, and it is argued that models and modelling are important in science teaching both because of the need to authentically reflect the importance of modelling in science itself, and because of the pedagogic role of models.
Abstract: This chapter discusses the nature and roles of models in science, and in science education. It is argued that models and modelling are important in science teaching both because of the need to authentically reflect the importance of modelling in science itself, and because of the pedagogic role of models.

7 citations



Journal ArticleDOI
21 Jun 2017
TL;DR: Shows some general characteristics of the use of computational tools for the analysis of texts, and some applications in the areas of public communication of S&T and Science and Technology Studies (STS), also pointing out some of their limitations and pitfalls.
Abstract: Thanks, on the one hand, to the extraordinary availability of colossal textual archives and, on the other hand, to advances in computational possibilities, today the social scientist has at their disposal an extraordinary laboratory, made of millions of interacting subjects and billions of texts. An unprecedented, yet challenging, opportunity for science. How to test and corroborate models? How to control, interpret and validate Big Data? What is the role of theory in the universe of patterns and statistical correlations? In this article, we will show some general characteristics of the use of computational tools for the analysis of texts, and some applications in the areas of public communication of S&T and Science and Technology Studies (STS), also pointing out some of their limitations and pitfalls.
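
As a small, hedged example of the kind of computational text analysis discussed (not the authors' own pipeline), the snippet below builds a document-term matrix for a tiny invented corpus with scikit-learn and ranks the most frequent terms; a recent scikit-learn release is assumed.

# Minimal sketch of corpus-level term counting; the three "documents" are
# invented placeholders for real article or media texts.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "Public communication of science increasingly happens online.",
    "Science and technology studies examine how science is communicated.",
    "Big data methods raise questions about validation and theory.",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(corpus)            # documents x terms, sparse

# Rank terms by total frequency across the corpus.
totals = dtm.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, count in sorted(zip(terms, totals), key=lambda pair: -pair[1])[:5]:
    print(term, int(count))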

6 citations


Journal ArticleDOI
TL;DR: This paper provides an historical overview of the theoretical antecedents leading to information theory, specifically those useful for understanding and teaching information science and systems.
Abstract: This paper provides an historical overview of the theoretical antecedents leading to information theory, specifically those useful for understanding and teaching information science and systems. Information may be discussed in a philosophical manner and at the same time be measurable. This notion of information can thus be the subject of philosophical discussion, and the ideas about information may be applied rigorously to a wide range of practical applications where processes are studied. Theoretical Information Science and specific Information Science applications may provide tools by which questions such as “When and what information can be produced?” can be answered, as well as how to implement a system to serve a particular commercial or other specific function. These topics are discussed, and philosophical literature that might serve as a prerequisite to learning Information Science is suggested.
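
The claim that information can be discussed philosophically and yet be measurable is usually made concrete through Shannon's entropy; the short example below is an illustration of that measure, not something taken from the paper.

# Shannon entropy H = -sum(p * log2(p)) of a discrete distribution,
# the standard way the "amount of information" is made measurable.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: a fair coin flip
print(shannon_entropy([0.9, 0.1]))    # ~0.47 bits: a biased, more predictable coin
print(shannon_entropy([0.25] * 4))    # 2.0 bits: a fair four-way choice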

5 citations


18 Jun 2017
TL;DR: The mission of the BMEP is to provide excellent education and promote innovative research enabling students to apply knowledge and approaches from the biomedical and clinical sciences in conjunction with design and quantitative principles, methods, and tools from the engineering disciplines to address human health related challenges of high relevance to Lebanon, the Middle East, and beyond.
Abstract: The mission of the BMEP is to provide excellent education and promote innovative research enabling students to apply knowledge and approaches from the biomedical and clinical sciences in conjunction with design and quantitative principles, methods, and tools from the engineering disciplines, to address human health related challenges of high relevance to Lebanon, the Middle East, and beyond. The program prepares its students to be leaders in their chosen areas of specialization committed to lifelong learning, critical thinking, and intellectual honesty.

2 citations



Proceedings ArticleDOI
Varghese Panthalookaran
01 Apr 2017
TL;DR: The department of Physics of Rajagiri School of Engineering & Technology, India has developed a unique approach to this problem based on ten years of experiments and fine-tuning, called “4H Approach”, which combines the faculties of Head, Heart, Hands and Habits to bring about a natural integration of science teaching into an engineering curriculum.
Abstract: Even as everybody agrees that developments in mathematics and science steer the course of engineering and technology, the question “How to integrate science teaching into an engineering curriculum?” remains an unsettled issue. The core competence of the faculty who teach basic science and mathematics to students of engineering often remains a fundamental challenge: either a teacher of engineering with a shaky foundation in mathematics and science deals with these topics, or a teacher of science or mathematics with no background in engineering handles them. In either case, an issue with the core competence of the teacher is bound to occur. Is it possible to develop a novel approach to integrate science teaching into an engineering curriculum? The department of Physics of Rajagiri School of Engineering & Technology, India has developed a unique approach to this problem based on ten years of experiments and fine-tuning. The method is called the “4H Approach”, which combines the faculties of Head, Heart, Hands and Habits (4Hs) to bring about a natural integration of science teaching into an engineering curriculum. Here the head aspect stands for a clear and distinct understanding of the fundamental concepts of science, the heart aspect stands for appreciation of those concepts based on their historical and biographical significance, the hands aspect stands for actual measurement of quantities related to those concepts in a laboratory, and the habits aspect stands for the development of skills and values associated with an engineering career. The method is also aimed at nurturing in aspirants to the engineering profession an important outcome required by the Accreditation Board for Engineering and Technology (ABET), namely, the “ability to apply mathematics, science and engineering principles.” Different programs and projects have been developed to realize these goals in relation to a course in Engineering Physics. Feedback from students suggests that they are generally appreciative of these programs and that the programs help them integrate their study of basic sciences into their engineering curriculum. The paper also charts the future course of development of the 4H Approach to suit the emerging requirements on next-generation engineers.

2 citations


DOI
01 Jan 2017
TL;DR: It is demonstrated how the applied sciences community can make a significant contribution in reducing the energy footprint of their computations.
Abstract: In this paper we propose a course of action towards a better understanding of energy consumption-related aspects in the development of scientific software as well as in the development and usage of ‘unconventional’ compute hardware in applied sciences. We demonstrate how the applied sciences community can make a significant contribution in reducing the energy footprint of their computations.

Journal ArticleDOI
TL;DR: Describes the approach to introductory data science instruction recently taken at the University of New Hampshire, which presents a unique opportunity to introduce broadly applicable fundamentals of computer science to a larger audience than is typically found in computer science courses and programs.
Abstract: Data science is an applied, interdisciplinary field that draws on technical skills at the intersection of computing, applied mathematics, and statistics. In addition to these technical skills, the successful data scientist is typically also an expert in an unrelated field where data analysis can be utilized. The interdisciplinary nature of data science presents a unique opportunity to introduce broadly applicable fundamentals of computer science to a larger audience than is typically found in our computer science courses and programs. This poster describes the approach to introductory data science instruction recently taken at the University of New Hampshire.

Book ChapterDOI
01 Jan 2017
TL;DR: This short chapter describes the relationship between computer science and creditions, and shows that any subfield of computer science that deals with data needs to make assumptions, and that these assumptions are nothing else than beliefs or creditions.
Abstract: This short chapter describes the relationship between computer science and creditions, and shows that any subfield of computer science that deals with data needs to make assumptions, and that these assumptions (we call them priors) are nothing else than beliefs or creditions. Two simple examples from computer vision and machine learning demonstrate the effect of priors. Finally, Bayesian inference is introduced as a principle method to handle priors in a rigorous manner.
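
A minimal sketch, not taken from the chapter, of how a prior belief enters Bayesian inference: a Beta prior over a coin's bias is updated by observed flips, and different priors yield different posteriors from the same data. The observation counts and prior parameters are invented.

# Beta-Binomial updating: a Beta(a, b) prior over a coin's bias combined with
# observed flips gives a Beta(a + heads, b + tails) posterior in closed form.
heads, tails = 7, 3                               # invented observations

priors = {
    "flat prior Beta(1, 1)": (1, 1),              # weak belief: any bias equally plausible
    "skeptical prior Beta(20, 20)": (20, 20),     # strong belief that the coin is fair
}

for label, (a, b) in priors.items():
    post_a, post_b = a + heads, b + tails
    posterior_mean = post_a / (post_a + post_b)
    print(f"{label}: posterior mean bias = {posterior_mean:.3f}")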

Book
08 Mar 2017
TL;DR: The 6th International Conference on Machinery, Materials Science and Engineering Applications (MMSE 2016), held 28-30 October 2016 in Wuhan, China, provides a platform for scientists and engineering researchers to exchange ideas, build cooperative relationships and discuss the latest scientific achievements.
Abstract: This conference proceeding contains papers presented at the 6th International Conference on Machinery, Materials Science and Engineering Applications (MMSE 2016), held 28-30 October, 2016 in Wuhan, China. The conference proceeding contributions cover a large number of topics, both theoretical and applied, including Material science, Electrical Engineering and Automation Control, Electronic Engineering, Applied Mechanics, Mechanical Engineering, Aerospace Science and Technology, Computer Science and Information technology and other related engineering topics. MMSE provides a perfect platform for scientists and engineering researchers to exchange ideas, build cooperative relationships and discuss the latest scientific achievements. MMSE will be of interest for academics and professionals working in a wide range of industrial, governmental and academic sectors, including Material Science, Electrical and Electronic Engineering, Information Technology and Telecommunications, Civil Engineering, Energy Production, Manufacturing, Mechanical Engineering, Nuclear Engineering, Transportation and Aerospace Science and Technology

Journal ArticleDOI
TL;DR: The status of computer science is determined as a formal field of science, which consists of theories, in each of which the following constructive conceptual transition is accomplished: the principles of algorithmic computations–programming paradigms–programs–running programs on computers.
Abstract: The basic terms of metascience and philosophy of science are considered. In their context, we give the definitions of science, a field of science, and two adjacent sciences in relation to computer science: mathematics and technology. We determine the status of computer science as a formal field of science, which consists of theories, in each of which the following constructive conceptual transition is accomplished: the principles of algorithmic computations–programming paradigms–programs–running programs on computers.

DOI
01 Jan 2017
TL;DR: This paper is supposed to be used as a contribution for the ‘Consultation on Cloud Computing Research Innovation Challenges for WP 2018-2020’ as called for by the European Commission (DG CONNECT, unit ‘Cloud and software’).
Abstract: This paper is supposed to be used as a contribution for the ‘Consultation on Cloud Computing Research Innovation Challenges for WP 2018-2020’ as called for by the European Commission (DG CONNECT, unit ‘Cloud and software’). We propose to encourage and support fundamental interdisciplinary research for making the benefits generated by cloud computing accessible to the applied science community. Introduction: Why cloud computing and high performance computing are contradictory. The basic idea of cloud computing (CC) is to abstract from an IT infrastructure, including compute, memory, networking and software resources, by virtualization. These resources are made accessible to the user in a dynamic and adaptive way. The major resulting advantages compared to a specially tailored ‘in-house solution’ are transparent and simple usage, enhanced flexibility due to scalability and adaptivity to a specific need, and finally increased efficiency due to savings in energy and money. The latter is due to scaling effects, operational efficiency, consolidation of resources and reduction of risks. The application is literally independent from any (local) data and compute resources, as these can be concentrated effectively. Altogether, these advantages may some day supersede the traditional local/regional data center approach found at modern universities and research centers. From the point of view of data center management and operations, CC leads to higher occupancy and therefore efficiency: the inevitable granularity effects that occur with medium or large workloads can be tackled by backfilling with many small jobs. In addition, because a specific application run's need for resources may vary from time to time, left-over capacities can be provided in a profitable ‘pay per use’ style. In High Performance Computing (HPC), on the other hand, virtualization and abstraction concepts contradict the usual approaches, especially in the simulation of technical processes: here, the focus is on enhancing the performance of an application by explicitly optimizing for a certain type of hardware. This requires a priori knowledge of the hardware, which is usually given because universities and regional research facilities have their own local or regional compute centers with comparatively static hardware components. This approach can in some cases generate several orders of magnitude of performance gains, and we call this concept hardware-oriented numerics. This paradigm comprises the simultaneous optimization for hardware, numerical and energy efficiency on all levels of application development [1,2,3,4]. One effort in hardware-oriented numerics is to optimize code and to develop or choose numerical methods with respect to a heterogeneous hardware ecosystem: multicore CPUs are as straightforward targets as hardware accelerators like GPUs, FPGAs, Xeon Phi processors, and system-on-a-chip designs such as ARM-based CPUs with integrated GPUs. In addition, there are non-uniform memory architectures on the device level as well as heterogeneous communication infrastructures on the cluster level. The usual design pattern, however, is to optimize code for a (single) given hardware configuration, where the simulation code is then optimized in a comparatively expensive way due to this proximity to hardware details. This development process is therefore the complete opposite of relying on a virtualization approach.
Today's scientific cloud computing is not feasible for numerical simulation: up to today, all efforts to make use of CC techniques in the science community can be characterized by what we call scientific cloud computing (SCC), which has basically been very successful for a specific type of application. In the scope of Big Data, a direct projection of a problem onto a bag-of-tasks programming model can often be found. Other problems that consist of smaller independent tasks, where the coupling and therefore the communication is minimal or zero, can also be coped with easily in a cloud environment. In numerical simulation, on the other hand, a strong coupling of the very computationally intense subproblems is the standard case. This induces a comparatively high synchronization need, requiring low communication latencies. The execution models of CC are literally blind to this type of strong coupling, because the virtualization shuts down any attempt to optimize inter-process communication. We believe that the development of numerical simulation software should be characterized by the synthesis of hardware, numerical, and energy efficiency. Hence, for this type of application, a CC concept which takes into account the heterogeneity of compute hardware would be most feasible. According to our vision, in future scenarios the user of such codes might want to choose run-time optimization in different metrics: flexibility is required in selecting how a specific run should be allocated to certain types of compute nodes. This flexibility has not yet been accounted for in the development of numerical code frameworks. A direct result of service providers internalizing the concept of hardware-oriented numerics would be that the user of the service would be able to make an a priori choice of the core requirements for the run. For instance, it could be decided whether an allocation of hardware should be made in order to minimize wall-clock time or to minimize energy to solution. Other hardware specifics could be made allocatable, such as the type and properties of the communication links between nodes. The service would then return a number of allocations based upon available hardware. After selection, a complex optimization problem has to be solved: the simulation software has to be able to select numerical algorithmic components that fit this allocation and, finally, a load balancing has to be performed for the individual problem to be solved. Towards a numerical cloud computing: in order to realize this vision, there are two fundamental problems to solve: (1) specially tailored numerics and load balancing strategies, as well as (2) mapping, scheduling and operation strategies for numerical simulation, have to be developed. In (1), numerical components in a code framework have to be revisited or developed from scratch with respect to (2) by adjusting them to the respective strategies. Such numerical alternatives range from preconditioners in linear solvers to whole discretization approaches on the model level. Different hardware-specific implementations have to be provided and tuned in order to enable the optimizer in (2) to succeed, which is closely related to performance engineering. This has to be done with respect to all levels of parallelism in modern hardware architectures and on all levels of an application. On the other hand, the systems and strategies developed in (2) have to be sensitive to the effects of specific numerics on specific hardware.
This problem is often closely related to numerical scaling, convergence and complexity theories. These theories and related skills are usually not addressed as an integral part of the training of computer scientists or service providers and operators. Here, an automatic tuning system has to be developed that is capable of deciding what type of numerics is to be used for a given hardware allocation, and which parts of the data are distributed to which part of the hardware by a static or even dynamic load balancing. The latter is an even more complex problem, keeping in mind the heterogeneity even within one specific allocation, where CPUs are, for instance, to be saturated alongside GPUs. This optimization problem is very similar to how compilers schedule instructions on the processor level. It is also multi-dimensional, as not only raw performance has to be optimized for, but also energy to solution, as stated in the previous section. Hence we emphasize that these two components, (1) and (2), cannot be brought up independently: specialists from the domain of applied mathematics, performance engineers and application specialists are required for the former, whereas the latter is to be coped with by computer science and service providers or specialists.
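
As a hedged sketch of the a-priori choice envisioned above, picking a hardware allocation that minimizes either wall-clock time or energy to solution, the toy code below selects from a list of candidate allocations. The allocation names and figures are invented, not results from the paper.

# Toy selection among candidate hardware allocations returned by a
# hypothetical cloud service; all numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Allocation:
    name: str
    est_runtime_s: float    # estimated wall-clock time for the run
    est_energy_kj: float    # estimated energy to solution

candidates = [
    Allocation("16 CPU nodes",    est_runtime_s=3600.0, est_energy_kj=9500.0),
    Allocation("4 GPU nodes",     est_runtime_s=1400.0, est_energy_kj=6200.0),
    Allocation("8 ARM SoC nodes", est_runtime_s=5200.0, est_energy_kj=4100.0),
]

def choose(allocations, objective):
    # objective is "time" (minimize wall-clock time) or "energy" (minimize energy to solution)
    key = (lambda a: a.est_runtime_s) if objective == "time" else (lambda a: a.est_energy_kj)
    return min(allocations, key=key)

print("fastest allocation:     ", choose(candidates, "time").name)
print("most frugal allocation: ", choose(candidates, "energy").name)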