
Showing papers on "Suite" published in 2018


Posted Content
TL;DR: The DeepMind Control Suite is a set of continuous control tasks with a standardised structure and interpretable rewards, intended to serve as performance benchmarks for reinforcement learning agents.
Abstract: The DeepMind Control Suite is a set of continuous control tasks with a standardised structure and interpretable rewards, intended to serve as performance benchmarks for reinforcement learning agents. The tasks are written in Python and powered by the MuJoCo physics engine, making them easy to use and modify. We include benchmarks for several learning algorithms. The Control Suite is publicly available at this https URL . A video summary of all tasks is available at this http URL .
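The standardised task structure the abstract describes (episodic reset/step interaction with bounded, interpretable rewards) can be sketched with a stub environment. The class and reward function below are purely illustrative, not the dm_control API; in the actual suite, tasks are loaded with suite.load(domain_name, task_name).

```python
import random

class StubEnv:
    """Toy stand-in for a Control Suite task: fixed-horizon episodes,
    per-step rewards normalised to [0, 1]."""

    def __init__(self, horizon=1000, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0, 0.0]  # observation

    def step(self, action):
        self.t += 1
        reward = max(0.0, min(1.0, 1.0 - abs(action)))  # bounded, interpretable
        done = self.t >= self.horizon
        return [0.0, 0.0], reward, done

def run_episode(env, policy):
    """Roll out one episode and return the undiscounted return."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, r, done = env.step(policy(obs))
        total += r
    return total

env = StubEnv(horizon=10)
ret = run_episode(env, lambda obs: 0.0)  # zero action earns reward 1.0 each step
print(ret)  # 10.0
```

Because every task shares this interface, the same agent loop can be benchmarked across all tasks unchanged, which is the point of the standardised structure.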

476 citations


Journal ArticleDOI
TL;DR: CCP4i2 is a graphical user interface to the CCP4 (Collaborative Computational Project, Number 4) software suite and a Python language framework for software automation.
Abstract: The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography brings together many programs and libraries that, by means of well-established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures.

340 citations


Journal ArticleDOI
TL;DR: The performance statistics demonstrate that the lion algorithm is equivalent to certain optimization algorithms while outperforming the majority of them, and that it maintains a better trade-off than the traditional algorithms.

Abstract: Nature-inspired optimization algorithms, especially evolutionary computation-based and swarm intelligence-based algorithms, are being used to solve a variety of optimization problems. Motivated by the need for such optimization algorithms, a novel algorithm based on a lion's unique social behavior was presented in our previous work; territorial defense and territorial takeover are its two characteristic behaviors. This paper takes the algorithm forward through rigorous and diverse performance tests to demonstrate its versatility. Four test suites are presented. The first two consist of benchmark optimization problems: the first compares against published results of evolutionary and several renowned optimization algorithms, while the second leads to a comparative study with state-of-the-art optimization algorithms. Test suite 3 takes up large-scale optimization problems, whereas test suite 4 considers benchmark engineering problems. The performance statistics demonstrate that the lion algorithm is equivalent to certain optimization algorithms while outperforming the majority of them. The results also demonstrate the better trade-off the lion algorithm maintains over the traditional algorithms.

138 citations


Proceedings ArticleDOI
02 Apr 2018
TL;DR: This paper extensively characterizes the SPEC CPU2017 applications with respect to several metrics, such as instruction mix, execution performance, and branch and cache behaviors, and presents detailed analysis to enable researchers to intelligently choose a diverse subset of the CPU2017 suite that accurately represents the whole suite, in order to reduce simulation time.
Abstract: The Standard Performance Evaluation Corporation (SPEC) CPU benchmark suite is commonly used in computer architecture research and has evolved to keep up with system microarchitecture and compiler changes. The SPEC CPU2006 suite, which remained the state of the art for 11 years, was retired in 2017 and is being replaced with the new SPEC CPU2017 suite. The new suite is expected to become mainstream for simulation-based design and optimization research for next-generation processors, memory subsystems, and compilers. In this paper, we extensively characterize the SPEC CPU2017 applications with respect to several metrics, such as instruction mix, execution performance, and branch and cache behaviors. We compare the CPU2017 and the CPU2006 suites to explore the workload similarities and differences. We also present detailed analysis to enable researchers to intelligently choose a diverse subset of the CPU2017 suite that accurately represents the whole suite, in order to reduce simulation time.
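One simple way to pick a diverse, representative subset of a suite is greedy max-min selection over per-benchmark metric vectors. This is only an illustration of the idea; the paper's actual subsetting is based on its detailed characterization data, and the metric values below are invented.

```python
import math

# Hypothetical per-benchmark metric vectors
# (e.g., branch ratio, L1 miss rate, IPC) -- values are made up.
metrics = {
    "600.perlbench_s": [0.21, 0.012, 1.8],
    "602.gcc_s":       [0.23, 0.030, 1.1],
    "605.mcf_s":       [0.19, 0.110, 0.4],
    "619.lbm_s":       [0.02, 0.090, 1.5],
    "625.x264_s":      [0.08, 0.005, 2.4],
}

def dist(a, b):
    """Euclidean distance between two metric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def representative_subset(metrics, k):
    """Greedy max-min: start from an arbitrary benchmark, then repeatedly
    add the benchmark farthest from everything already selected."""
    names = sorted(metrics)
    chosen = [names[0]]
    while len(chosen) < k:
        best = max((n for n in names if n not in chosen),
                   key=lambda n: min(dist(metrics[n], metrics[c]) for c in chosen))
        chosen.append(best)
    return chosen

print(representative_subset(metrics, 3))
```

In practice the metric vectors would first be normalised per dimension so that no single metric dominates the distance.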

74 citations


Journal ArticleDOI
TL;DR: A suite of cognitive complexity metrics that can be used to evaluate OO software projects is presented, which includes method complexity, message complexity, attribute complexity, weighted class complexity, and code complexity.
Abstract: Object orientation has gained a wide adoption in the software development community. To this end, different metrics that can be utilized in measuring and improving the quality of object-oriented (OO) software have been proposed, by providing insight into the maintainability and reliability of the system. Some of these software metrics are based on cognitive weight and are referred to as cognitive complexity metrics. It is our objective in this paper to present a suite of cognitive complexity metrics that can be used to evaluate OO software projects. The present suite of metrics includes method complexity, message complexity, attribute complexity, weighted class complexity, and code complexity. The metrics suite was evaluated theoretically using measurement theory and Weyuker’s properties, practically using Kaner’s framework and empirically using thirty projects.
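As an illustration of how such metrics compose, a weighted class complexity can be computed by summing cognitive weights over the control structures of each method. The weights below follow a common convention in the cognitive-complexity literature; the paper's exact weights and formulas may differ.

```python
# Common cognitive weights: sequence = 1, branch (if) = 2, loop = 3, call = 2.
# These particular values are a convention from the literature, used here
# purely for illustration.
WEIGHTS = {"sequence": 1, "branch": 2, "loop": 3, "call": 2}

def method_complexity(structures):
    """Cognitive weight of one method, given the control structures it contains."""
    return sum(WEIGHTS[s] for s in structures)

def weighted_class_complexity(methods):
    """Sum of the cognitive complexities of a class's methods."""
    return sum(method_complexity(s) for s in methods.values())

# A hypothetical Account class with three methods.
account = {
    "deposit":  ["sequence", "branch"],            # 1 + 2 = 3
    "withdraw": ["sequence", "branch", "branch"],  # 1 + 2 + 2 = 5
    "audit":    ["loop", "call"],                  # 3 + 2 = 5
}
print(weighted_class_complexity(account))  # 13
```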

47 citations


Proceedings ArticleDOI
28 May 2018
TL;DR: The quality of microbenchmark suites is investigated with a focus on suitability to deliver quick performance feedback and CI integration, and a performance-test quality metric called the API benchmarking score (ABS) is introduced, representing a benchmark suite's ability to find slowdowns among a set of defined core API methods.

Abstract: Continuous integration (CI) emphasizes quick feedback to developers. This is at odds with the current practice of performance testing, which predominantly focuses on long-running tests against entire systems in production-like environments. Alternatively, software microbenchmarking attempts to establish a performance baseline for small code fragments in short time. This paper investigates the quality of microbenchmark suites with a focus on suitability to deliver quick performance feedback and CI integration. We study ten open-source libraries written in Java and Go with benchmark suite sizes ranging from 16 to 983 tests, and runtimes between 11 minutes and 8.75 hours. We show that our study subjects include benchmarks with result variability of 50% or higher, indicating that not all benchmarks are useful for reliable discovery of slowdowns. We further artificially inject actual slowdowns into public API methods of the study subjects and test whether test suites are able to discover them. We introduce a performance-test quality metric called the API benchmarking score (ABS). ABS represents a benchmark suite's ability to find slowdowns among a set of defined core API methods. Resulting benchmarking scores (i.e., fraction of discovered slowdowns) vary between 10% and 100% for the study subjects. This paper's methodology and results can be used to (1) assess the quality of existing microbenchmark suites, (2) select a set of tests to be run as part of CI, and (3) suggest or generate benchmarks for currently untested parts of an API.
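The two measurements the study relies on, result variability and the fraction of injected slowdowns a suite discovers, can be sketched as follows. The 5% detection threshold and the timing data are invented for illustration; the paper's actual ABS definition may differ in detail.

```python
from statistics import mean, stdev

def variability(samples):
    """Coefficient of variation of repeated benchmark results, in percent."""
    return 100.0 * stdev(samples) / mean(samples)

def detects_slowdown(baseline, slowed, threshold_pct=5.0):
    """A benchmark 'finds' an injected slowdown if mean runtime grows
    by more than the threshold (an illustrative criterion)."""
    return mean(slowed) > mean(baseline) * (1 + threshold_pct / 100.0)

def benchmarking_score(results):
    """Fraction of core API methods whose injected slowdown was discovered.
    `results` maps method name -> (baseline samples, slowed samples)."""
    found = sum(detects_slowdown(b, s) for b, s in results.values())
    return found / len(results)

results = {
    "parse":     ([10.0, 10.2, 9.9], [12.0, 12.1, 11.8]),  # clear slowdown
    "serialize": ([5.0, 5.1, 4.9],   [5.0, 5.2, 5.0]),     # within noise
}
print(benchmarking_score(results))  # 0.5
```

A benchmark whose own variability exceeds the detection threshold cannot reliably flag slowdowns of that size, which is why the paper treats high-variability benchmarks as unsuitable for CI feedback.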

39 citations


Book ChapterDOI
01 Jan 2018
TL;DR: As you saw throughout this book, Office 365 provides a comprehensive suite of apps to foster collaboration in the workplace.
Abstract: As you saw throughout this book, Microsoft 365 provides a comprehensive suite of apps to foster collaboration in the workplace. Collaboration is not just working on a document together but sharing information, working on a team, and managing aspects and artifacts in the workplace.

33 citations


Posted Content
TL;DR: New data structures are defined along with new functions (verbs) to perform common operations to provide a cohesive framework for handling, exploring, and imputing missing values.
Abstract: Despite the large body of research on missing value distributions and imputation, there is comparatively little literature on how to make it easy to handle, explore, and impute missing values in data. This paper addresses this gap. The new methodology builds upon tidy data principles, with the goal of integrating missing value handling as an integral part of data analysis workflows. New data structures are defined along with new functions (verbs) to perform common operations. Together these provide a cohesive framework for handling, exploring, and imputing missing values. These methods have been made available in the R package naniar.
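naniar's verbs are R functions; purely to illustrate the kind of operation meant, here is a Python sketch of a per-variable missingness summary, analogous in spirit to naniar's miss_var_summary (the data and function shape are invented, not the package's API).

```python
def miss_var_summary(rows):
    """Per-variable missingness counts and percentages for a list of records,
    where None marks a missing value."""
    variables = rows[0].keys()
    n = len(rows)
    summary = {}
    for var in variables:
        n_miss = sum(1 for row in rows if row[var] is None)
        summary[var] = {"n_miss": n_miss, "pct_miss": 100.0 * n_miss / n}
    return summary

rows = [
    {"height": 1.7,  "weight": None},
    {"height": None, "weight": 70.0},
    {"height": 1.8,  "weight": None},
]
print(miss_var_summary(rows))
```

Making such summaries first-class "verbs" is what lets missingness exploration sit inside an ordinary analysis pipeline instead of being ad-hoc preprocessing.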

33 citations


Journal ArticleDOI
TL;DR: In tabular datasets, it is usually relatively easy to, at a glance, understand patterns of missing data of individual rows, columns, and entries, but it is far harder to see patterns in the missingness of data that extend between them.
Abstract: Algorithmic models and outputs are only as good as the data they are computed on. As the popular saying goes: garbage in, garbage out. In tabular datasets, it is usually relatively easy to, at a glance, understand patterns of missing data (or nullity) of individual rows, columns, and entries. However, it is far harder to see patterns in the missingness of data that extend between them. Understanding such patterns in data is beneficial, if not outright critical, to most applications.

31 citations


Journal ArticleDOI
TL;DR: A firm-level cloud computing readiness metrics suite is presented, its applicability for various cloud computing service types is assessed, and the ways in which applying the metrics suite supports organizational users of cloud computing are examined.
Abstract: Recent research on cloud computing adoption suggests the lack of a deep understanding of its benefits by managers and organizations. We present a firm-level cloud computing readiness metrics suite and assess its applicability for various cloud computing service types. We propose four relevant categories for firm-level adoption readiness, including technology and performance, organization and strategy, economic and valuation, and regulatory and environmental dimensions. We further define sub-categories and measures for each. Our evidence of the appropriateness of the metrics suite is derived based on a series of empirical cases developed from our project work, which encompasses input from field interviews, business press sources, industry white papers, non-governmental organizations, and government agency sources. We also assess how the application of the metrics suite supports organizational users of cloud computing.

27 citations


Book ChapterDOI
01 Jan 2018
TL;DR: Tint 2.0 is presented, an open-source, fast and extendable Natural Language Processing suite for Italian based on Stanford CoreNLP, which includes improvements to the existing NLP modules and a set of new text processing components for fine-grained linguistic analysis that were not available so far.
Abstract: In this paper we present Tint 2.0, an open-source, fast and extendable Natural Language Processing suite for Italian based on Stanford CoreNLP. The new release includes some improvements to the existing NLP modules, and a set of new text processing components for fine-grained linguistic analysis that were not available so far, including multi-word expression recognition, affix analysis, readability computation and classification of complex verb tenses.

Journal ArticleDOI
TL;DR: An empirical study on the effects of using evolutionary algorithms to generate test suites, compared with generating test suites incrementally with random search, suggests that, although evolutionary algorithms are more effective at covering complex branches, a random search may suffice to achieve high coverage of most object-oriented classes.
Abstract: An important aim in software testing is constructing a test suite with high structural code coverage, that is, ensuring that most if not all of the code under test has been executed by the test cases comprising the test suite. Several search-based techniques have proved successful at automatically generating tests that achieve high coverage. However, despite the well-established arguments behind using evolutionary search algorithms (e.g., genetic algorithms) in preference to random search, it remains an open question whether the benefits can actually be observed in practice when generating unit test suites for object-oriented classes. In this paper, we report an empirical study on the effects of using evolutionary algorithms (including a genetic algorithm and chemical reaction optimization) to generate test suites, compared with generating test suites incrementally with random search. We apply the EVOSUITE unit test suite generator to 1000 classes randomly selected from the SF110 corpus of open-source projects. Surprisingly, the results show that the difference is much smaller than one might expect: while evolutionary search covers more branches of the type where standard fitness functions provide guidance, we observed that, in practice, the vast majority of branches do not provide any guidance to the search. These results suggest that, although evolutionary algorithms are more effective at covering complex branches, a random search may suffice to achieve high coverage of most object-oriented classes.

Journal ArticleDOI
TL;DR: In this paper, a discrete choice model of product differentiation is developed to estimate correlation in consumer preferences across spreadsheets and word processors, and the authors examine the competitive effects of bundling in a simulated market setting of partial competition.
Abstract: Our paper examines the importance of office suites for the evolution of the PC office software market in the 1990s. We develop a discrete choice model of product differentiation that enables us to estimate correlation in consumer preferences across spreadsheets and word processors. Estimation confirms strong positive correlation of consumer values for spreadsheets and word processor products, a bonus value for suites, and an advantage for Microsoft products. We use the estimated demand model to simulate various ‘hypothetical’ market structures in order to shed light on the welfare and competitive effects of bundling in the office productivity software market. We examine the competitive effects of bundling in a simulated market setting of partial competition, in which Lotus sells only a spreadsheet and WordPerfect sells only a word processor, while Microsoft sells both components as well as a suite. Assuming the rivals remain active in the market, when the correlation is positive, the introduction of the suite is pro-competitive (i.e., beneficial for consumers) on balance. This is mainly because the suite bonus 'value' is much larger than the difference between the suite price and the sum of Microsoft’s component prices when Microsoft does not offer a suite. When there is strong positive correlation (as we find), there are many such consumers who purchase both components separately when suites are not available. All of these consumers 'switch' to the suite when it is introduced, and reap significant benefits. The simulations show that the introduction of Microsoft’s Office suite also expands the distribution of spreadsheets and word processors, and this is beneficial to consumers as well.

Proceedings ArticleDOI
01 Jul 2018
TL;DR: This paper analyzes certain aspects of the benchmark suite and the computational effort required to solve its problems, and identifies a subset of problems on which Genetic Programming performs poorly.
Abstract: Program synthesis is a complex problem domain tackled by many communities via different methods. In the last few years, a lot of progress has been made with Genetic Programming (GP) on solving a variety of general program synthesis problems for which a benchmark suite has been introduced. While Genetic Programming is capable of finding correct solutions for many problems contained in a general program synthesis problems benchmark suite, the actual success rate per problem is low in most cases. In this paper, we analyse certain aspects of the benchmark suite and the computational effort required to solve its problems. A subset of problems on which GP performs poorly is identified. This subset is analysed to find measures to increase success rates for similar problems. The paper concludes with suggestions to refine performance on program synthesis problems.

Book ChapterDOI
05 Nov 2018
TL;DR: This paper shows how industry's need for model-based validation, performance evaluation and synthesis shaped the Uppaal Tool Suite, and how the tool suite in turn aided the use cases in which it was applied.
Abstract: In this paper we review how the Uppaal Tool Suite served in industrial projects and was both driven and improved by them throughout the last 20 years. We show how the need of industry for model-based validation, performance evaluation and synthesis shaped the tool suite and how the tool suite aided the use cases it was applied in. The paper highlights a number of selected cases, including success stories and pitfalls, and we discuss the important roles of both basic research and industrial projects.

Proceedings ArticleDOI
02 Apr 2018
TL;DR: The Data Integration Benchmarking Suite (DIBS) is introduced, a suite of applications that are representative of data integration workloads across many disciplines and a comprehensive characterization is applied to these applications to better understand the general behavior of data Integration tasks.
Abstract: As the generation of data becomes more prolific, the amount of time and resources necessary to perform analyses on these data increases. What is less understood, however, is the data preprocessing steps that must be applied before any meaningful analysis can begin. This problem of taking data in some initial form and transforming it into a desired one is known as data integration. Here, we introduce the Data Integration Benchmarking Suite (DIBS), a suite of applications that are representative of data integration workloads across many disciplines. We apply a comprehensive characterization to these applications to better understand the general behavior of data integration tasks. As a result of our benchmark suite and characterization methods, we offer insight regarding data integration tasks that will guide other researchers designing solutions in this area.

01 Jan 2018
TL;DR: Tynes explored leadership mentoring strategies used to develop future C-suite executives in the waste industry; four themes emerged from the study, covering talent identification, mentoring and coaching, formal and informal strategies, and succession planning.
Abstract: Mentoring Strategies to Prevent Leadership Shortfalls Among C-Suite Executives, by Vernon Walter Tynes (MBA, University of Louisville, 1996; BSBA/BSHCA, Western Kentucky University, 1980). Doctoral study submitted in partial fulfillment of the requirements for the degree of Doctor of Business Administration, Walden University, June 2018. Corporate organizations are facing a shortage of future senior management leaders. The purpose of this single case study was to explore leadership mentoring strategies used to develop future C-suite executives in the waste industry. Companies may improve business practices by mentoring future generations to understand corporate responsibilities and expectations. The target population came from a regional waste company located in central Florida. The study participants consisted of 3 C-suite executives of the company responsible for the management and mentoring of future C-suite executive mentees. The conceptual framework for this study was rooted in transformational leadership theory. Data were collected using semistructured face-to-face interviews, along with supporting documentation provided by the C-suite executives, including the company succession plan. Through methodological triangulation, coding, and thematic analysis, 4 themes emerged that could help C-suite executives in the successful mentoring of future C-suite executives: (1) C-suite executives use various strategies to identify talent, (2) C-suite executives use various mentoring and coaching strategies to develop future C-suite executives, (3) C-suite executives use formal and informal leadership strategies to mentor, and (4) succession planning is in place or planned. The implication for social change was improved mentoring strategies for future C-suite candidates. These strategies may transfer to industries that face generational mentoring issues and challenges, improving structural and managerial growth and stability, which will aid in providing community employment opportunities.


Proceedings ArticleDOI
01 Jul 2018
TL;DR: A computational benchmark suite is presented for quantifying the performance of modern RCS simulations that contains a set of scattering problems that are organized along six dimensions and range from basic to challenging.
Abstract: A computational benchmark suite is presented for quantifying the performance of modern RCS simulations. The suite contains a set of scattering problems that are organized along six dimensions and range from basic to challenging. It also includes reference solutions, performance metrics, and recommended studies that can be used to reveal the strengths and deficiencies of different simulation methods.

Report SeriesDOI
17 Apr 2018
TL;DR: The method and tool were developed to address problems in the traditional process of regression testing, such as lack of time to run a complete regression suite, failure to detect bugs in time, and tests that are repeatedly omitted.
Abstract: We propose a new method of determining an effective ordering of regression test cases, and describe its implementation as an automated tool called SuiteBuilder developed by Westermo Research and Development AB. The tool generates an efficient order to run the cases in an existing test suite by using expected or observed test duration and combining priorities of multiple factors associated with test cases, including previous fault detection success, interval since last executed, and modifications to the code tested. The method and tool were developed to address problems in the traditional process of regression testing, such as lack of time to run a complete regression suite, failure to detect bugs in time, and tests that are repeatedly omitted. The tool has been integrated into the existing nightly test framework for Westermo software that runs on large-scale data communication systems. In experimental evaluation of the tool, we found significant improvement in regression testing results. The re-ordered test suites finish within the available time, the majority of fault-detecting test cases are located in the first third of the suite, no important test case is omitted, and the necessity for manual work on the suites is greatly reduced.
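The ordering idea, combining several per-test-case priority factors and then packing tests into the available time window, can be sketched like this. The factor names and weights are illustrative assumptions, not Westermo's actual SuiteBuilder implementation.

```python
def prioritize(tests, budget, weights=(0.5, 0.3, 0.2)):
    """Order test cases by a weighted combination of factors (each in [0, 1]):
    past fault-detection success, time since last execution, and whether the
    code under test was modified. Then greedily fill the time budget."""
    w_fault, w_age, w_mod = weights

    def priority(t):
        return (w_fault * t["fault_rate"]
                + w_age * t["staleness"]
                + w_mod * t["code_modified"])

    ordered = sorted(tests, key=priority, reverse=True)
    selected, used = [], 0.0
    for t in ordered:
        if used + t["duration"] <= budget:
            selected.append(t["name"])
            used += t["duration"]
    return selected

tests = [
    {"name": "t_boot",    "duration": 30, "fault_rate": 0.9, "staleness": 0.2, "code_modified": 1},
    {"name": "t_routing", "duration": 50, "fault_rate": 0.1, "staleness": 0.9, "code_modified": 0},
    {"name": "t_gui",     "duration": 40, "fault_rate": 0.3, "staleness": 0.1, "code_modified": 0},
]
print(prioritize(tests, budget=80))  # highest-priority tests that fit the window
```

Ordering by descending priority is what pushes fault-detecting cases toward the first third of the suite, while the budget check ensures the nightly run finishes on time.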

Journal ArticleDOI
TL;DR: This essay reads Christina Sharpe's In the Wake: On Blackness and Being as a call to attend to the limits of what freedom means for black people around the world.
Abstract: Abstract:This essay suggests that Christina Sharpe’s In the Wake: On Blackness and Being calls us to attend to the limits of what freedom means for black people around the world. By reading some recent conceptual claims in black studies alongside Sharpe’s work, it seeks to demonstrate a certain theoretical cul de sac in contemporary black studies theorizations that requires further thought if we are to diligently contemplate the stakes of freedom for black people globally.

Journal ArticleDOI
TL;DR: An online approach to the assessment and management of dynamic security of electric power systems (EPS) uses a streaming modification of the random forest algorithm to recognize dangerous modes of complex closed-loop EPS, preventing the risk of emergencies at early stages.
Abstract: We propose a suite of intelligent tools, based on the integration of agent-based modeling and machine learning methods, for the improvement of protection systems and emergency automatics. We propose an online approach to the assessment and management of dynamic security of electric power systems (EPS) that uses a streaming modification of the random forest algorithm. The suite makes it possible to recognize dangerous modes of complex closed-loop EPS, preventing the risk of emergencies at early stages. We show results of experimental tests on IEEE test systems.

19 May 2018
TL;DR: The article argues that a training platform contributes to the intensification of the educational process when teaching foreign languages in higher education and activates students' external motivation.
Abstract: The article presents technologies for creating interactive multimedia training materials, using the example of iSpring programs for developing hybrid training courses used for contact and distance learning. It argues that a training platform contributes to the intensification of the educational process when teaching foreign languages in higher education and activates students' external motivation. The significance of web technologies for the language educational space is also noted.

Proceedings ArticleDOI
28 Oct 2018
TL;DR: This work describes empirical, educational and technological research conducted within the context of Brazilian formal literacy education, addressing the question of whether research-specific software for mobile devices could improve reading-related variables among disadvantaged children.
Abstract: This work describes empirical, educational and technological research conducted within the context of Brazilian formal literacy education problems. The main purpose of the research was to address the question of whether research-specific software for mobile devices could improve disadvantaged children's reading-related variables. In this regard, zReader, a mobile game suite for reading and storytelling, was developed and validated by means of an experiment with children attending second and third grades in a Brazilian public school. Results include positive inferential findings for improvement in children's reading skills and book reading frequency.

Journal ArticleDOI
TL;DR: The benchmark suite extends the current state-of-the-art problems for deep-reinforcement learning by offering an infinite state and action space for multiple players in a non-zero-sum game environment of imperfect information and provides a model that can be characterized as both a credit assignment problem and an optimization problem.
Abstract: Recent developments in deep-reinforcement learning have yielded promising results in artificial games and test domains. To explore opportunities and evaluate the performance of these machine learning techniques, various benchmark suites are available, such as the Arcade Learning Environment, rllab, OpenAI Gym, and the StarCraft II Learning Environment. This set of benchmark suites is extended with the open business simulation model described here, which helps to promote the use of machine learning techniques as value-adding tools in the context of strategic decision making and economic model calibration and harmonization. The benchmark suite extends the current state-of-the-art problems for deep-reinforcement learning by offering an infinite state and action space for multiple players in a non-zero-sum game environment of imperfect information. It provides a model that can be characterized as both a credit assignment problem and an optimization problem. Experiments with this suite's deep-reinforcement learning algorithms, which yield remarkable results for various artificial games, highlight that stylized market behavior can be replicated, but the infinite action space, simultaneous decision making, and imperfect information pose a computational challenge. With the directions provided, the benchmark suite can be used to explore new solutions in machine learning for strategic decision making and model calibration.

Journal Article
TL;DR: An analytical model based on simulation aiming at patient flow optimization in the surgical suite is proposed, and the results indicate that performing the best improvement scenario can decrease patients' LOS in the system by 22.15%.
Abstract: Surgical suites account for a large share of hospital expenses; on the other hand, they constitute a huge part of hospital revenues. Patient flow optimization in a surgical suite, by omitting or reducing bottlenecks that cause loss of time, is one of the key solutions for minimizing patients' length of stay (LOS) in the system, lowering expenses, increasing efficiency, and enhancing patient satisfaction. In this paper, an analytical model based on simulation aiming at patient flow optimization in the surgical suite is proposed. To achieve this goal, a model of patients' workflow was first created using discrete-event simulation. Afterward, improvement scenarios were applied to the simulated model of the surgical suite. Among the defined scenarios, the combination scenario, consisting of omitting the waiting time between the patients' entrance to the surgical suite and the beginning of the admission procedure, being on time for the first operation, and adding a resource to the transportation and recovery room resources, was chosen as the best. The results of the simulation indicate that performing this scenario can decrease patients' LOS in the system by 22.15%.
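A discrete-event simulation of the kind described, patients queueing FIFO for a single shared resource with LOS measured from arrival to departure, can be sketched minimally as follows. The arrival and service times are invented for illustration; the paper's actual model has multiple resources and scenarios.

```python
def simulate(arrivals, service_time):
    """Single-server discrete-event simulation: patients arrive at the given
    times, are served FIFO, and LOS = departure time - arrival time."""
    free_at = 0.0  # when the shared resource (e.g., operating room) is next free
    los = []
    for arrival in sorted(arrivals):
        start = max(arrival, free_at)   # wait if the resource is busy
        free_at = start + service_time  # occupy the resource
        los.append(free_at - arrival)
    return los

# Removing a bottleneck (shorter service time) cuts average length of stay.
baseline = simulate([0, 1, 2, 3], service_time=2.0)
improved = simulate([0, 1, 2, 3], service_time=1.0)
print(sum(baseline) / len(baseline), sum(improved) / len(improved))
```

Comparing the baseline and improved runs mirrors the paper's approach of applying improvement scenarios to the simulated model and measuring the change in LOS.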

Journal ArticleDOI
TL;DR: This suite has proven to provide higher-quality documentation and data recording and retrieval with unparalleled security, and it scales to the size of the laboratory by adding server space with limited resources.
Abstract: Notebooks are an important part of research for tracking both wet- and dry-lab analytical data. Paper notebooks and Excel sheets remain the most popular, conventional means of record keeping. To meet these demands, an open-source electronic laboratory notebook was developed that can track users' research needs. Based on the user specifications, the suite is written in PHP (Hypertext Preprocessor) and backed by a MySQL relational database. It supports single-user or multi-user access controls. A Linux server hosts the application and database, and the server-side hardware requirements of the suite are moderate. Bounds and ranges have also been considered and should be used according to the user instructions. Sharing can be limited to a single individual or extended to research groups. Adequate server and database security measures and daily backups protect the data from aging damage and ensure long-lasting availability. Notable advantages of this system are that it runs entirely in the web browser with no client software required, runs on industry-standard servers supporting the major operating systems (Windows, Linux and OS X), and can upload and store external files. After testing and validation with beta users over 48 days, the suite has proven to provide higher-quality documentation and data recording and retrieval with unparalleled security. The suite scales to the size of the laboratory by adding server space with limited resources. Availability: The electronic notebook is hosted on a personal Linux server and can be accessed at: http://131.96.32.229/login-system/index.php Key words: Internet/Web-based learning, pedagogy, database, Hypertext Preprocessor (PHP), electronic laboratory notebook.


Journal ArticleDOI
TL;DR: The bcc mini-language and its suite of tools are described; they make it possible to explain and exemplify theoretical and practical aspects of compiler construction and to develop the capstone project of the course.
Abstract: Teaching compiler construction principles in a one-semester introductory course is an important and complex topic in the computer science curriculum. Most textbooks are devoted to developing a toy, mini, or classroom language, but it is almost impossible to cover all of their material in a one-semester course. In this paper, the bcc mini-language and its suite of tools are described. They are implemented entirely by hand in Python as command-line applications. The suite is composed of independent programs, one for executing each phase of the compiler. It is the cornerstone of a compiler construction course, making it possible to explain and exemplify theoretical and practical aspects and to develop the capstone project of the course.
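The phase-as-independent-program design the abstract describes can be illustrated with a standalone lexing phase: a small Python command-line script that reads source text and emits one token per line, so later phases (parser, code generator) can consume its output in a pipeline. The token grammar below is a hypothetical toy set, not the actual bcc language definition:

```python
import re
import sys

# Hypothetical token set for illustration; the real bcc grammar
# is defined by the course suite, not shown in the abstract.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Lexing phase: turn source text into (kind, value) pairs."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]

if __name__ == "__main__":
    # Run as an independent phase: source on stdin, tokens on stdout,
    # ready to be piped into the next phase of the compiler.
    for kind, value in tokenize(sys.stdin.read()):
        print(f"{kind}\t{value}")
```

Keeping each phase a separate executable lets a course cover one phase per assignment while the shell pipeline (e.g. lexer, then parser, then code generator) assembles the full capstone compiler.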

01 Jan 2018
TL;DR: This PhD project explores the development of a comprehensive modular ontology IDE and tool suite that guides engineers through best practices and promotes ontology design pattern discovery, sharing, and reuse.
Abstract: Published ontologies frequently fall short of their promises to enable knowledge sharing and reuse. This may be due to too strong or too weak ontological commitments; one way to prevent this is to engineer the ontology to be modular, thus allowing users to more easily adapt ontologies to their own individual use-cases. In order to enable this engineering paradigm, there is a distinct need for more supporting tools and infrastructure. This increased support can be immediately impactful in a number of ways: it can guide engineers through best practices and promote ontology design pattern discovery, sharing, and reuse. To meet these needs, this PhD project explores the development of a comprehensive modular ontology IDE and tool suite.