
Showing papers in "International Journal of Software Engineering & Applications in 2018"


Journal ArticleDOI
TL;DR: In this paper, the authors propose a customized quality model for smart government, based on the available software quality models, including McCall's, Boehm, Dromey, FURPS, and the ISO 9126 quality model.
Abstract: Smart government is the next generation of e-government, touching people closely through the perceived quality of services. Although a variety of models exist that can measure ordinary software quality, there is a lack of models suited to measuring the quality of smart government services. To build a smart government, however, it is crucial to take quality into consideration. This paper aims to propose a customized quality model for smart government, built on the available software quality models for smart government portals. To achieve the aims of the research, it was critical to analyze the key related models (McCall's, Boehm, Dromey, FURPS, and the ISO 9126 quality model) and obtain the intersection of their characteristics and sub-characteristics. The resulting model consists of the most appropriate and related quality characteristics and sub-characteristics. The key finding indicates the importance of conducting a practical study to propose a novel model for these purposes.

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the most common indicators and estimators used to identify and quantify technical debt in software development, classify thirteen types of technical debt, and survey the tools used in the selected studies, helping researchers identify areas of interest and future work on technical debt.
Abstract: Context: Technical Debt (TD) is a metaphor that refers to short-term solutions in software development that may affect the cost of the software development life cycle. Objective: To explore and understand TD as it relates to the software industry and to provide an overview of the current state of TD research. Forty-three empirical TD studies were collected for classification and analysis. Goals: Classify TD types, find the indicators used to detect TD, find the estimators used to quantify TD, and evaluate how researchers investigate TD. Method: A systematic mapping study was performed to identify and analyze empirical TD studies published between 2014 and 2017. Results: We present the most common indicators and estimators used to identify and evaluate TD, and we gathered thirteen types of TD. We also show the ways the selected studies investigate TD and the tools they use. Conclusion: The outcome of our systematic mapping study can help researchers identify areas of interest and future work in TD.

10 citations


Journal ArticleDOI
TL;DR: A new software change effort estimation model is proposed that helps software project managers estimate the effort for software requirement changes during the Software Development Phase; the overall Mean Magnitude of Relative Error produced by the new model is under 25%.
Abstract: Software requirement changes are a typical phenomenon in any software development project. Restricting incoming changes may cause user dissatisfaction, while allowing too many changes may delay project delivery. Moreover, accepting or rejecting change requests becomes challenging for software project managers when changes occur during the Software Development Phase (SDP), when software artifacts are not in a consistent state: some class artifacts are fully developed, some are half developed, some are major developed, some are minor developed, and some are not developed yet. Software effort estimation and change impact analysis are the two most common techniques that can help software project managers accept or reject change requests during this phase. The aim of this research is to develop a new software change effort estimation model that helps software project managers estimate the effort for requirement changes during the Software Development Phase. This research analyzed the existing effort estimation models and change impact analysis techniques for the Software Development Phase reported in the literature and proposed a new software change effort estimation model by combining a change impact analysis technique with an effort estimation model. The proposed model was then evaluated experimentally on four small software projects selected as cases. The results show that the overall Mean Magnitude of Relative Error (MMRE) produced by the new model is under 25%. Hence, the new model is applicable for estimating the effort for requirement changes during the SDP.
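
For reference, the Mean Magnitude of Relative Error reported above is a standard estimation-accuracy measure. A minimal sketch of its computation follows; the effort values are invented for illustration.

```python
# Illustration only: MMRE = mean of |actual - estimated| / actual over all cases.
def mmre(actual_efforts, estimated_efforts):
    relative_errors = [
        abs(a - e) / a for a, e in zip(actual_efforts, estimated_efforts)
    ]
    return sum(relative_errors) / len(relative_errors)

# Made-up effort values (e.g. person-hours) for four change requests:
actual = [120.0, 80.0, 200.0, 150.0]
estimated = [100.0, 90.0, 180.0, 160.0]
print(f"MMRE = {mmre(actual, estimated):.2%}")  # a model is commonly deemed acceptable under 25%
```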

9 citations


Journal ArticleDOI
TL;DR: The sentiments of game developers are examined to measure their feelings of guilt about working in this career; results show that the Support Vector Machine (SVM) approach is more accurate than Naive Bayes (NB) and Decision Tree classifiers.
Abstract: Game development is one of the most important emerging fields of the software engineering era. Game addiction is a present-day disorder associated with playing computer and video games. Shame is a negative feeling of self-evaluation, while guilt is a negative evaluation of transgressive behaviour; both are associated with adaptive and concealing responses. Sentiment analysis represents major progress toward understanding web users' opinions. In this paper, the sentiments of game developers are examined to measure their feelings of guilt about working in this career. The sentiment analysis model is implemented in the following steps: sentiment collection, sentiment pre-processing, and then classification with machine learning methods. The model classifies sentiments into guilt or no guilt and is trained on 1,000 sentiments from the Reddit website. Results show that the Support Vector Machine (SVM) approach is more accurate than Naive Bayes (NB) and Decision Tree classifiers.
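
As a rough illustration of such a guilt/no-guilt classifier (not the authors' code; the posts and labels below are invented placeholders standing in for the labelled Reddit data), a scikit-learn pipeline comparing SVM and Naive Bayes might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder posts standing in for the 1,000 labelled Reddit sentiments.
train_posts = [
    "I feel awful about the crunch we forced on the team",
    "Proud of the level design we shipped this week",
    "I regret how addictive we made the reward loop",
    "Great playtest today, the mechanics feel solid",
]
train_labels = ["guilt", "no_guilt", "guilt", "no_guilt"]

for name, classifier in [("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())]:
    # Vectorize the pre-processed text, then train the classifier.
    model = make_pipeline(TfidfVectorizer(stop_words="english"), classifier)
    model.fit(train_posts, train_labels)
    print(name, model.predict(["I feel guilty making addictive games"]))
```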

9 citations


Journal ArticleDOI
TL;DR: This paper focuses on capturing intelligent spyware capable of breaking through the new CAPTCHA-trap IDS, so that information about it can be gathered and the necessary action taken against it.
Abstract: Intrusion Detection Systems (IDS) are an essential element of network security infrastructure and play an important role in detecting a large number of attacks. An Intrusion Prevention System (IPS) is a tool used to prevent spyware from intruding into a system, and one of the techniques used in IPS is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). To detect illegal web access by an intruder, IDS and IPS can be implemented together with a honeypot that tracks the attacker's IP address, location, and country or region, so that the attacker can be blocked from accessing the system. Different researchers have adopted different techniques using IDS, IPS, and honeypots to protect their systems against illegal attacks. In existing systems, however, CAPTCHA has not been employed within the IDS to detect spyware capable of breaking in and gaining access to the system. To increase and maintain network security, a combination of an IDS with CAPTCHA, an IPS, and a dummy honeypot can be employed. This work proposes a CAPTCHA-based intrusion detection model with a redirector to identify intelligent spyware capable of breaking the CAPTCHA in the IPS. It also uses a dummy honeypot with circular hyperlinks to lure the software that infiltrated the system, capturing its IP address and other important information, such as the country or region it comes from, the web browser used, and the date and time of intrusion, so that illegal access by intruders can be blocked and prevented. This paper focuses on capturing intelligent spyware capable of breaking through the new CAPTCHA-trap IDS, so that information about it can be gathered and the necessary action taken. A security model was designed comprising a CAPTCHA IDS with a redirector, an IPS, and a honeypot capable of detecting intrusion by intelligent spyware. With this model, the network will be more secure against intrusion by spyware.
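
A minimal sketch of the dummy-honeypot idea follows; the Flask route name and log file are assumptions invented for illustration, not taken from the paper. The endpoint records the attacker details the model collects (IP address, browser, date and time) and serves a circular hyperlink back to itself.

```python
from datetime import datetime
from flask import Flask, request

app = Flask(__name__)

@app.route("/honeypot")
def honeypot():
    # Record the intruder details described in the paper.
    with open("intrusions.log", "a") as log:
        log.write(f"{datetime.now().isoformat()} "
                  f"ip={request.remote_addr} "
                  f"browser={request.headers.get('User-Agent', 'unknown')}\n")
    # Serve a circular hyperlink so the spyware keeps crawling the trap.
    return '<a href="/honeypot">next</a>'

if __name__ == "__main__":
    app.run()
```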

9 citations


Journal ArticleDOI
TL;DR: In this paper, a textual feature modeling language based on the Python programming language (PyFML) is proposed, which generalizes classical feature models with instance feature cardinalities and attributes, extended with support for replication and complex logical and mathematical cross-tree constraints.
Abstract: The feature model is a typical approach to capturing variability in the design and implementation of a software product line. Most existing works automate feature models using a limited graphical notation represented by propositional logic and implemented in Prolog or Java. These works do not properly combine the extensions of classical feature models and do not scale to large problems. In this work, we propose a textual feature modeling language based on the Python programming language (PyFML) that generalizes classical feature models with instance feature cardinalities and attributes, extended with support for replication and complex logical and mathematical cross-tree constraints. The textX meta-language is used to build PyFML and to describe and organize feature model dependencies, and the PyConstraint problem solver is used to implement feature model variability and validate its constraints. The work provides a textual, human-readable language for representing feature models and maps feature model descriptions directly into an object-oriented representation that the constraint problem solver can use for computation. Furthermore, the proposed PyFML makes the notation of feature modeling expressive enough to deal with complex software product line representations.
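
The constraint-solving step can be illustrated with the python-constraint package (assumed here to be the "PyConstraint problem solver" the abstract names). The small feature model below, with invented feature names, a parent-child relation, an or-group, and a cross-tree "requires" constraint, is a sketch, not the PyFML implementation.

```python
from constraint import Problem

problem = Problem()
for feature in ["Root", "Payment", "CreditCard", "Cash", "Security"]:
    problem.addVariable(feature, [0, 1])  # 1 = feature selected

problem.addConstraint(lambda r: r == 1, ["Root"])                  # root is mandatory
problem.addConstraint(lambda r, p: p <= r, ["Root", "Payment"])    # child implies parent
problem.addConstraint(lambda p, cc: cc <= p, ["Payment", "CreditCard"])
problem.addConstraint(lambda p, ca: ca <= p, ["Payment", "Cash"])
problem.addConstraint(lambda p, cc, ca: cc + ca >= p,              # or-group under Payment
                      ["Payment", "CreditCard", "Cash"])
problem.addConstraint(lambda cc, s: s >= cc,                       # cross-tree: requires
                      ["CreditCard", "Security"])

print(len(problem.getSolutions()), "valid configurations")
```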

5 citations


Journal ArticleDOI
TL;DR: A 2-learner, ontology-based, pseudo-instances-enhanced approach, in which two classifiers are trained to separately exploit two types of features, lexical features and features derived from a hand-built ontology, is investigated for the requirements traceability task in software requirements engineering and evolution.
Abstract: Software requirements engineering and evolution are essential to the software development process, defining and elaborating what is to be built in a project. Requirements are mostly written in text and later evolve into fine-grained, actionable artifacts with details about system configurations, technology stacks, etc. Tracing the evolution of requirements enables stakeholders to determine the origin of each requirement and understand how well the software's design reflects its requirements. Since reckoning requirements traceability is not a trivial task, a machine learning approach is used to classify traceability between various associated requirements. In particular, a 2-learner, ontology-based, pseudo-instances-enhanced approach, in which two classifiers are trained to separately exploit two types of features, lexical features and features derived from a hand-built ontology, is investigated for this task. The hand-built ontology is also leveraged to generate pseudo training instances to improve the machine learning results. Compared to a supervised baseline system that uses only lexical features, our approach yields a relative error reduction of 56.0%. Most interestingly, results do not deteriorate when the hand-built ontology is replaced with its automatically constructed counterpart.
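
The two-view setup can be sketched as follows; this is a hedged illustration with synthetic data and logistic-regression stand-ins, not the paper's actual features, classifiers, or pseudo-instance generation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two feature views per requirement pair (synthetic placeholders):
X_lexical = np.random.rand(40, 10)    # e.g. TF-IDF similarity features
X_ontology = np.random.rand(40, 5)    # e.g. ontology concept-overlap scores
y = np.random.randint(0, 2, size=40)  # 1 = traceability link exists

# Train one learner per view.
lexical_clf = LogisticRegression().fit(X_lexical, y)
ontology_clf = LogisticRegression().fit(X_ontology, y)

# Combine the two learners by averaging their predicted probabilities.
p = (lexical_clf.predict_proba(X_lexical)[:, 1]
     + ontology_clf.predict_proba(X_ontology)[:, 1]) / 2
predictions = (p >= 0.5).astype(int)
```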

3 citations



Journal ArticleDOI
TL;DR: This paper examines the feasibility and potential advantages of employing an aspect-orientation approach in the software development lifecycle to ensure efficient integration of security, and proposes a model called the Aspect-Oriented Software Security Development Life Cycle (AOSSDLC), which covers a range of security activities and deliverables for each development stage.
Abstract: In the past 10 years, the research community has produced a significant number of design notations for representing security properties and concepts in design artifacts; these notations are aimed at documenting and analyzing security in a software design model. The need to improve the security of software has become a key issue for developers. The security function needs to be incorporated into the software development process at the requirements, analysis, design, and implementation stages, as doing so may help to smooth integration and protect systems from attack. Security affects all aspects of a software program, which makes the incorporation of security features a crosscutting concern. Therefore, this paper examines the feasibility and potential advantages of employing an aspect-orientation approach in the software development lifecycle to ensure efficient integration of security. It also proposes a model called the Aspect-Oriented Software Security Development Life Cycle (AOSSDLC), which covers a range of security activities and deliverables for each development stage. It is concluded that aspect orientation is one of the best options available for installing security features, not least because no changes need to be made to the existing software structure.

3 citations


Journal ArticleDOI
TL;DR: This paper addresses three related issues and their integration, Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), and Manufacturing Execution Systems (MES), and shows how to integrate all three in a unified systems engineering framework.
Abstract: This paper addresses three related issues and their integration: Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), and Manufacturing Execution Systems (MES). Our work concerns how to integrate all of these in a unified systems engineering framework. Although most companies, about two-thirds, claim to have integrated ERP with PLM, we still observe related problems, as also reported by the Aberdeen Group. For actual global data sharing, there are options for also integrating systems best practices toward this objective. This critical study arrives at a solution through reverse engineering, revisiting the requirements engineering steps, and proposing validation and verification of the success factors of such an integration.

3 citations


Journal ArticleDOI
TL;DR: This research work recommends the use of image steganography with RSA as a digital signature to cloud service providers and users, since it can secure the major data types used in the cloud, text, image, audio, and video, while consuming fewer system resources.
Abstract: Cloud computing provides many shareable resources to users, payable on demand. The drawback of cloud computing is its security challenges, since data in the cloud are managed by a third party. Steganography and cryptography are among the security measures applied in the cloud to secure user data. The objective of steganography is to hide the existence of communication from unintended users, whereas cryptography provides security for user data transferred in the cloud. Since users pay for the services they utilize in the cloud, it becomes imperative to evaluate the performance of the algorithms used to secure user data, in order to know the resources those algorithms consume, such as storage memory, network bandwidth, computing power, and encryption and decryption time. In this work, we implemented and evaluated the performance of text steganography with the RSA algorithm and image steganography with RSA as a digital signature, considering four test cases. The simulation results show that image steganography with RSA as a digital signature performs better than text steganography with the RSA algorithm. The performance differences between the two algorithms across the four test cases are 10.76, 9.93, 10.53, and 10.53 seconds for encryption time; 60.68, 40.94, 40.9, and 41.85 seconds for decryption time; 8.1, 10.92, 15.2, and 5.17 MB for memory used when hiding data; 5.3, 1.95, and 17.18 MB for memory used when extracting data; 0.93, 1.04, 1.36, and 3.76 MB for bandwidth used; and 75.75, 36.2, 36.9, and 37.45 kWh for processing power used when hiding and extracting data, respectively. The exception is test case 2, where text steganography with the RSA algorithm performs better than image steganography with RSA as a digital signature in terms of memory used when extracting data, with a performance difference of -5.09 MB, because of the bit size of the extracted image data. This research work recommends the use of image steganography with RSA as a digital signature to cloud service providers and users, since it can secure the major data types used in the cloud, text, image, audio, and video, and consumes fewer system resources.
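
To make the combination concrete, here is an illustrative sketch, not the evaluated implementation, of RSA signing (via the `cryptography` package) followed by a toy least-significant-bit embed over a flat byte array standing in for image pixels.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Sign the payload so its origin and integrity can be verified after extraction.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"secret payload for the cloud"
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

def lsb_embed(pixels, payload):
    """Hide payload bits in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

# A fake "image": enough cover bytes for the 2048-bit signature.
cover = bytes(range(256)) * ((len(signature) * 8) // 256 + 1)
stego = lsb_embed(cover, signature)
```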

Journal ArticleDOI
TL;DR: In this article, the authors describe the importance of capturing process metrics during the quality audit process, attempt to categorize them based on the nature of the error captured, and recommend steps for corrective action.
Abstract: Software product quality can be defined as the features and characteristics of a product that meet user needs. Quality in software can be achieved by following a well-defined software process. Such software processes yield various metrics: project metrics, product metrics, and process metrics. Software quality depends on the process carried out to design and develop the software; even when that process is carried out with utmost care, it can still introduce errors and defects. Process metrics are very useful from a management point of view. They can be used to improve the software development and maintenance process, both for defect removal and for reducing response time. This paper describes the importance of capturing process metrics during the quality audit process and attempts to categorize them based on the nature of the error captured. To reduce the errors and defects found, steps for corrective action are recommended.

Journal ArticleDOI
TL;DR: It is found that Test-Driven Development is an efficient approach that improves the quality of the test code to cover more of the source code and also to reveal more faults based on mutation analysis.
Abstract: As software engineering methodologies continue to progress, new techniques for writing and developing software have emerged. One of these methods is known as Test-Driven Development (TDD). In TDD, tests are written first, prior to code development. Due to the nature of TDD, no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high. However, it is unclear whether test suites developed using TDD exhibit high quality in terms of other metrics such as mutation score. Moreover, the association between coverage and mutation score is little studied. In this work, we analyze applications written using TDD and more traditional techniques. Specifically, we assess the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion represented by branch and statement coverage, and 2) a fault-based criterion represented by mutation scores. We hypothesize that test suites with high test coverage will also have high mutation scores, and we especially intend to show this in the case of TDD applications. We found that Test-Driven Development is an efficient approach that improves the quality of the test code, covering more of the source code and revealing more faults under mutation analysis.
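
The two quality metrics can be made concrete with a small worked example; the numbers below are invented for illustration, and the mutation score follows the usual convention of excluding equivalent mutants.

```python
# Invented measurements for one test suite:
statements_covered, statements_total = 180, 200
mutants_killed, mutants_total, mutants_equivalent = 76, 100, 5

# Structure-based criterion: fraction of statements the tests execute.
statement_coverage = statements_covered / statements_total
# Fault-based criterion: fraction of non-equivalent mutants the tests kill.
mutation_score = mutants_killed / (mutants_total - mutants_equivalent)

print(f"statement coverage = {statement_coverage:.0%}")  # 90%
print(f"mutation score     = {mutation_score:.0%}")      # 80%
```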

Journal ArticleDOI
TL;DR: It is confirmed that software engineers alone could distinguish between a software problem and an infrastructure problem using the tool, and causes of a real authentication error were found.
Abstract: We propose a log-based analysis tool for evaluating web application computer systems. A feature of the tool is the integration of software logs with infrastructure logs. With the tool, software engineers alone can resolve system faults, even when the faults involve both software problems and infrastructure problems. The tool consists of five steps: software preparation, infrastructure preparation, log collection, log replay, and log tracing. The tool was applied to a simple web application system in a small-scale local area network. We confirmed the usefulness of the tool when a software engineer detects system failures such as "404" and "no response" errors. In addition, the tool was partially applied to a real large-scale computer system with many web applications and a large network environment. Using the tool's replay and tracing steps, we found the causes of a real authentication error; the causes combined an infrastructure problem with a software problem. We confirmed that, even when a failure is caused by both a software problem and an infrastructure problem, software engineers alone can distinguish between the two using the tool.
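
The integration step at the heart of the tool can be sketched as a timestamp-ordered merge of the two logs. The file names and the "ISO timestamp, space, message" line format below are assumptions for illustration, not the paper's actual formats.

```python
from datetime import datetime

def read_log(path, source):
    """Parse 'ISO-timestamp message' lines, tagging each entry with its source."""
    entries = []
    with open(path) as f:
        for line in f:
            timestamp, _, message = line.rstrip("\n").partition(" ")
            entries.append((datetime.fromisoformat(timestamp), source, message))
    return entries

# Merge both layers into one time-ordered trace a software engineer can follow.
merged = sorted(read_log("app.log", "software")
                + read_log("infra.log", "infrastructure"))
for ts, source, message in merged:
    print(ts.isoformat(), f"[{source}]", message)
```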

Journal ArticleDOI
TL;DR: This study aims to showcase the game analysis, the design of the game concept, and a precise form of game-level operation specification in which an operation schema declaratively describes the effects of a game operation.
Abstract: The Unified Modeling Language (UML) is a language for the specification, visualization, and documentation of object-oriented software systems. Existing UML diagrams can conveniently model behavior, but they can hardly be used to model games: UML cannot explicitly describe the game requirements needed to model the Queens Challenge game. This study discusses the modeling of the Queens Challenge puzzle from levels 1 to 25 and proposes extensions to UML covering the use case diagram, sequence diagram, activity diagram, and class diagram views. The use of the game is illustrated with a Queens Challenge puzzle example spanning levels 1 to 25. The extensions made to each UML diagram to allow the explicit representation of the proposed system are then described: first, a context diagram for the proposed N-queens game system is introduced, and then the proposed modifications to the UML are presented. The study thereby showcases the game analysis, the design of the game concept, and a precise form of game-level operation specification in which an operation schema declaratively describes the effects of a game operation, by means of the use case model, actors, use cases, relationships between actors and use cases, the interaction between the prototype and its user, and the sequence, activity, and class diagrams of the Queens Challenge puzzle from levels 1 to 25, as defined by the UML.

Journal ArticleDOI
TL;DR: The study concludes that the use of semi-structured data tools for accessing bioinformatics databases is a viable alternative to the structured tools, though each method is shown to have certain inherent advantages and disadvantages.
Abstract: There is a wide range of available biological databases developed by bioinformatics experts, employing different methods to extract biological data. In this paper, we investigate and evaluate the performance of some of these methods in terms of their ability to efficiently access bioinformatics databases using web-based interfaces. These methods retrieve bioinformatics information using structured and semi-structured data tools, which are able to retrieve data from remote database servers. This study distinguishes each of these approaches and contrasts these tools. We used Sequence Retrieval System (SRS) and Entrez search tools for structured data, while Perl and BioPerl search programs were used for semi-structured data to retrieve complex queries including a combination of text and numeric information. The study concludes that the use of semi-structured data tools for accessing bioinformatics databases is a viable alternative to the structured tools, though each method is shown to have certain inherent advantages and disadvantages.
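
The paper's retrieval scripts use Perl/BioPerl; as a rough Python analogue, Biopython's Entrez module issues the same kind of remote query against NCBI databases. The query term and email below are placeholders.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

# A text query combining terms, analogous to the combined text/numeric
# queries evaluated in the study.
handle = Entrez.esearch(db="protein", term="human insulin", retmax=5)
record = Entrez.read(handle)
handle.close()

print(record["IdList"])  # identifiers of matching database entries
```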

Journal ArticleDOI
TL;DR: An analysis exploring the extent to which users' uploaded programs changed in frequency and complexity over time revealed a high rate of drop-out, a drop in the complexity of programs uploaded to the forum during the first two years after users’ first (respective) uploads of programs to the Forum, and a slow long-term upward trend in complexity.
Abstract: Engineers and others can learn to use programming environments such as LabVIEW via online resources, including the LabVIEW forum. However, an interesting challenge in such a diffuse and distributed learning environment is assessing to what extent engineers are increasing in programming skill. This paper presents an analysis exploring the extent to which users’ uploaded programs changed in frequency and complexity over time. This study revealed a high rate of drop-out, a drop in the complexity of programs uploaded to the forum during the first two years after users’ first (respective) uploads of programs to the forum, and a slow long-term upward trend in complexity. The results highlight the need for further research aimed at assessing and promoting online learning of programming.

Journal ArticleDOI
TL;DR: The proposed approach improves CP diagnosis and consequently benefits physical therapy treatment; it was evaluated on a real dataset of 70 pre-diagnosed cases, of which 84% were correctly diagnosed.
Abstract: Cerebral Palsy (CP) is one of the most complicated disabilities: a permanent motor disorder causing mental and physical impairment. Reports published by various health organizations have called for research on CP disability in order to improve diagnosis. Globally, various studies have been conducted to improve CP diagnosis, but most of them do not diagnose CP at an early age, which limits the impact of treatment. This paper reports on research conducted to develop an ontology-based approach to diagnosing children with CP at an early age. An ontology was used to represent the CP domain. Then, a set of manually built rules was optimized through a knowledge-based survey for use in CP diagnosis. The proposed approach improves CP diagnosis and consequently benefits physical therapy treatment. It was evaluated on a real dataset of 70 pre-diagnosed cases, of which 84% were correctly diagnosed.
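
The shape of such manually built rules over ontology-backed findings can be sketched as below; the features, thresholds, and two-rule cutoff are invented purely for illustration and are not the clinically validated rules of the paper.

```python
# Illustrative rule shape only; all clinical criteria here are made up.
def cp_risk(case):
    rules = [
        case["born_preterm"] and case["gestation_weeks"] < 32,
        case["motor_milestone_delay_months"] >= 4,
        case["abnormal_muscle_tone"],
    ]
    return sum(rules) >= 2  # flag when at least two rules fire

case = {"born_preterm": True, "gestation_weeks": 30,
        "motor_milestone_delay_months": 5, "abnormal_muscle_tone": False}
print("refer for CP assessment" if cp_risk(case) else "no flag")
```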

Journal ArticleDOI
TL;DR: Software testing is a critical phase in the software development lifecycle used to evaluate the software, yet there is a lack of well-defined guidelines or a methodology to direct testers in performing tests.
Abstract: Software systems that meet the stakeholders' needs and expectations are the ultimate objective of the software provider. Software testing is a critical phase in the software development lifecycle that is used to evaluate the software. Tests can be written by testers or by automatic test generators in many different ways and with different goals. Yet, there is a lack of well-defined guidelines or a methodology to direct the testers to perform tests.

Journal ArticleDOI
TL;DR: A preliminary treatment of the image is introduced to improve its quality, and a hardware implementation is presented to ensure reliable and fast authentication of people.
Abstract: Fingerprint recognition technology is becoming increasingly popular and widely used in many applications that require a high level of security. Several types of sensors can be integrated into a fingerprint recognition system, along with several types of image processing algorithms, to ensure reliable and fast authentication of people. Embedded systems come in a wide variety, and the choice of a well-designed processor is one of the most important factors directly affecting the overall performance of the system. This paper introduces a preliminary treatment of the image to improve its quality, and then presents a hardware implementation.
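
A minimal sketch of a typical fingerprint pre-treatment, not the paper's specific pipeline, using OpenCV contrast enhancement and light denoising; the input file name is an assumption.

```python
import cv2

# Load the raw sensor capture in grayscale (file name assumed for illustration).
image = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)

# Local contrast enhancement sharpens ridge/valley structure.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)

# A mild blur suppresses sensor noise before feature extraction.
denoised = cv2.GaussianBlur(enhanced, (3, 3), 0)
cv2.imwrite("fingerprint_enhanced.png", denoised)
```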

Journal ArticleDOI
TL;DR: This study presents an approach to improve the requirements engineering phase of IoT application development by using the Object-Oriented Analysis and Design Approach (OOADA) along with Constraints Story Card (CSC) templates.
Abstract: The Internet of Things (IoT) is one of the most trending technologies, with a wide range of applications. Here we focus on medical and healthcare applications of IoT. Such IoT applications are generally very complex, comprising many different modules, so a lot of care must be taken during their requirements engineering. Requirements engineering is the process of structuring all the requirements of the users. It is the base phase of software development and greatly affects all subsequent phases: if the effort falls short here, the quality of the end product suffers. In this study we present an approach to improve the requirements engineering phase of IoT application development by using the Object-Oriented Analysis and Design Approach (OOADA) along with Constraints Story Card (CSC) templates.

Journal ArticleDOI
TL;DR: A tool, StaticMock, for creating mock objects in compiled languages, using source-to-source compilation together with aspect-oriented programming to deliver a unique solution that does not rely on the previous, commonly used techniques.
Abstract: Mock object frameworks are very useful for creating unit tests. However, purely compiled languages lack robust frameworks for mock objects. The frameworks that do exist rely on inheritance, compiler directives, or linker manipulation. Such techniques limit the applicability of the existing frameworks, especially when dealing with legacy code. We present a tool, StaticMock, for creating mock objects in compiled languages. This tool uses source-to-source compilation together with aspect-oriented programming to deliver a unique solution that does not rely on the previous, commonly used techniques. We evaluate the compile-time and run-time overhead incurred by this tool, and we demonstrate its effectiveness by showing that it can be applied to new and existing code.


Journal ArticleDOI
TL;DR: The purpose of this research was to investigate to what extent CRESUS-T both aids communication in the development of a shared understanding and supports collaborative requirements elicitation to bring about organisational, and associated IT infrastructural, change.
Abstract: Communicating an organisation's requirements in a semantically consistent and understandable manner, and then reflecting the potential impact of those requirements on the IT infrastructure, presents a major challenge among stakeholders. Initial research findings indicate a desire among business executives for a tool that allows them to communicate organisational changes using natural language and a model of the IT infrastructure that supports those changes. Building on a detailed analysis and evaluation of these findings, the innovative CRESUS-T support tool was designed and implemented. The purpose of this research was to investigate to what extent CRESUS-T both aids communication in the development of a shared understanding and supports collaborative requirements elicitation to bring about organisational, and associated IT infrastructural, change. To determine the extent to which shared understanding was fostered, the support tool was evaluated in a case study of a business process for the roll-out of the IT software image at a third-level educational institution. Statistical analysis showed that the CRESUS-T support tool fostered shared understanding in the case study through increased communication. Shared understanding is also manifested in the creation of two knowledge representation artefacts, namely a requirements model and the IT infrastructure model. The CRESUS-T support tool will be useful to requirements engineers and business analysts who have to gather requirements asynchronously.

Journal ArticleDOI
TL;DR: The details of setting up a JupyterHub environment on a server running CentOS 7 are described, and a discussion of lessons learned from using this system in data science classes is included.
Abstract: Jupyter notebooks, formerly known as IPython notebooks, are widely used for data analysis and other areas of scientific computing. Notebooks can contain formatted text, images, LaTeX formulas, as well as code that can be executed, edited, and executed again. A JupyterHub is a multi-user server for Jupyter notebooks, and setting one up is a complex endeavour involving many steps. The instructions found online often have to be customized for different operating systems, and no single source covers all aspects of the setup. This paper describes the details of setting up a JupyterHub environment on a server running CentOS 7 and includes a discussion of lessons learned from using the system in data science classes.
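
As a flavour of the configuration involved, a jupyterhub_config.py might contain lines like the following. This is an excerpt in the spirit of the setup described, with example values; it is not taken from the paper, and option names reflect JupyterHub releases of that era.

```python
# jupyterhub_config.py -- loaded by JupyterHub at startup.
c = get_config()  # injected by JupyterHub when this file is read

c.JupyterHub.ip = '0.0.0.0'             # listen on all interfaces
c.JupyterHub.port = 8000                # public-facing port
c.Spawner.notebook_dir = '~/notebooks'  # where each user's server starts
c.Authenticator.admin_users = {'instructor'}  # example admin account
```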

Journal ArticleDOI
TL;DR: The formal method employs an algorithm based on Concept Algebra and is applied in a Scrum case study; the results are promising and differ from those in the existing EF-related literature.
Abstract: The Essence Framework (EF) aims to address the core problems of software engineering (SE) and its practices. For EF, a relatively new framework, one important issue has been mapping software practices to its conceptual domain. Although several works describe systematic procedures for doing so, a review of the literature found no study using a formal method. This study is conducted according to the guidelines of the Design Science Research (DSR) method. The research contribution is classified as an application of a new solution (the formal method) to a new problem (mapping software practices to EF). The formal method employs an algorithm based on Concept Algebra and is applied in a Scrum case study. The results are promising, and they differ from those in the existing EF-related literature.