
Showing papers in "Innovations in Systems and Software Engineering" (2019)


Journal ArticleDOI
TL;DR: The main purpose of this work is to increase the longevity of the battery used in conventional hearing aids; the prototype is designed using a MEMS microphone and low-cost amplifier ICs with biasing components in the form of a pocket-type (body-worn) hearing aid.
Abstract: In this paper, a MEMS-based capacitive microphone and a low-cost amplifier are designed for a low-cost, power-efficient hearing aid application. The developed microphone along with the associated circuitry is mounted on a common board in the form of a pocket-type (body-worn) device. The designed microphone consists of a flexible circular silicon nitride (Si3N4) diaphragm and a polysilicon perforated back plate with air as the dielectric between them. Acoustic waves incident on the sensor deflect the diaphragm and alter the air gap between the perforated back plate (fixed electrode) and the diaphragm (moving plate), which causes a change in capacitance. The acoustic pressure applied to the microphone ranges from 0 to 100 Pa over an operating range of 100 Hz–10 kHz, which corresponds to the audible frequency range of human beings. The main purpose of this work is to increase the longevity of the battery used in conventional hearing aids. The designed MEMS microphone with the Si3N4 diaphragm is capable of identifying acoustic frequencies (100 Hz to 10 kHz) that correspond to a specific change in absolute pressure from 0 to 100 Pa for a 2-micron-thick diaphragm, with a sensitivity of about 0.08676 mV/Pa. The design of the sensor and the characteristic analysis are performed in FEM-based simulation software and later validated in real time. The prototype is built using the MEMS microphone and low-cost amplifier ICs with biasing components in the form of a pocket-type (body-worn) hearing aid. In order to study the performance of the proposed device, three different commercially available amplifiers with controllable gain are used. Finally, the performance of the hearing aid is studied through audio spectrogram analysis to choose the best-suited amplifier among the three.
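As a rough illustration of the sensing principle described above (not the paper's design), a parallel-plate approximation shows how diaphragm deflection maps acoustic pressure to a capacitance change. All dimensions and the compliance value below are assumed for the sketch:

```python
import numpy as np

# Minimal parallel-plate sketch of a capacitive MEMS microphone.
# All dimensions are illustrative assumptions, not the paper's design values.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
RADIUS = 0.5e-3           # diaphragm radius, m (assumed)
GAP0 = 4e-6               # nominal air gap, m (assumed)
COMPLIANCE = 2e-11        # centre deflection per unit pressure, m/Pa (assumed)

AREA = np.pi * RADIUS ** 2

def capacitance(pressure_pa: float) -> float:
    """Capacitance of the diaphragm/back-plate pair at a given acoustic pressure."""
    deflection = COMPLIANCE * pressure_pa        # diaphragm moves toward the back plate
    return EPS0 * AREA / (GAP0 - deflection)

if __name__ == "__main__":
    for p in (0.0, 50.0, 100.0):                 # 0-100 Pa operating range from the abstract
        print(f"{p:6.1f} Pa -> {capacitance(p) * 1e12:.4f} pF")
```

A readout circuit (or the amplifier ICs mentioned above) would convert this capacitance change into the millivolt-level output the paper reports.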

21 citations


Journal ArticleDOI
TL;DR: This work presents a comprehensive study of energy-efficient integrated sensor-based systems aimed at achieving energy efficiency and prolonging network lifetime.
Abstract: Small sensor nodes are used as the basic components for collecting and sending data in ad hoc mode in a wireless sensor network (WSN). Such a network is generally used to collect and process data from regions where human movement is rare. The sensor nodes are deployed in such regions to collect data over an ad hoc network where unusual situations may occur at any time, or where no fixed network can operate reliably and provide a transmission path. The location may be very remote or a disaster-prone area. In a disaster-prone zone, most often no fixed network remains alive after a disaster. In that scenario, an ad hoc sensor network is one of the reliable means of collecting and transmitting data from the region. In this type of situation, a sensor network can also support geo-informatic systems. WSNs can be used to handle disaster management manually as well as through automated systems. The main problem for any activity using sensor nodes is that the nodes are very battery-hungry. Efficient power utilization is required to enhance network lifetime by reducing data traffic in the WSN. For this reason, efficient intelligent software and hardware techniques are required to make the most efficient use of limited resources in terms of energy, computation and storage. One of the most suitable approaches is a data aggregation protocol, which can reduce communication cost and thereby extend the lifetime of sensor networks. Such techniques can be implemented in different ways, but not all are useful in the same application scenarios. More specifically, data can be collected dynamically using rendezvous points (RPs); intelligent neural network-based cluster formation techniques can be used for that purpose, and the ant colony optimization algorithm can be used to fix the targeted base station. In this work, we present a comprehensive study of such energy-efficient integrated sensor-based systems in order to achieve energy efficiency and prolong network lifetime.
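The energy argument behind data aggregation can be made concrete with the first-order radio model commonly used in WSN studies; the coefficients below are textbook-style assumptions, not values from this survey:

```python
# First-order radio energy model often used in WSN studies (illustrative values).
E_ELEC = 50e-9       # J/bit spent in transmitter/receiver electronics
E_AMP = 100e-12      # J/bit/m^2 spent in the transmit amplifier (free-space model)

def tx_energy(bits: int, distance_m: float) -> float:
    """Energy to transmit `bits` over `distance_m` metres."""
    return E_ELEC * bits + E_AMP * bits * distance_m ** 2

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return E_ELEC * bits

# 10 nodes each sending a 4000-bit reading 100 m directly to the base station...
direct = 10 * tx_energy(4000, 100.0)

# ...versus sending 20 m to a cluster head / rendezvous point that aggregates
# everything into a single 4000-bit packet and forwards it to the base station.
aggregated = (10 * tx_energy(4000, 20.0)
              + 10 * rx_energy(4000)
              + tx_energy(4000, 100.0))

print(f"direct:     {direct * 1e3:.2f} mJ")
print(f"aggregated: {aggregated * 1e3:.2f} mJ")
```

Even this crude model shows the aggregated scheme spending a fraction of the energy of direct transmission, which is the effect the clustering and rendezvous-point techniques surveyed here exploit.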

21 citations


Journal ArticleDOI
TL;DR: An executable formal model of human attention and multitasking in Real-Time Maude is defined and model checking is applied to show that in some cases the cognitive load of the navigation system could cause the driver to keep the focus away from driving for too long, and that working memory overload and distraction may cause an air traffic controller or a medical operator to make critical mistakes.
Abstract: When a person is concurrently interacting with different systems, the amount of cognitive resources required (cognitive load) could be too high and might prevent some tasks from being completed. When such human multitasking involves safety-critical tasks, such as in an airplane, a spacecraft, or a car, failure to devote sufficient attention to the different tasks could have serious consequences. For example, using a GPS with high cognitive load while driving might take the attention away for too long from the safety-critical task of driving the car. To study this problem, we define an executable formal model of human attention and multitasking in Real-Time Maude. It includes a description of the human working memory and the cognitive processes involved in the interaction with a device. Our framework enables us to analyze human multitasking through simulation, reachability analysis, and LTL and timed CTL model checking, and we show how a number of prototypical multitasking problems can be analyzed in Real-Time Maude. We illustrate our modeling and analysis framework by studying: (i) the interaction with a GPS navigation system while driving, (ii) some typical scenarios involving human errors in air traffic control (ATC), and (iii) a medical operator setting multiple infusion pumps simultaneously. We apply model checking to show that in some cases the cognitive load of the navigation system could cause the driver to keep the focus away from driving for too long, and that working memory overload and distraction may cause an air traffic controller or a medical operator to make critical mistakes.
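The paper's analysis is done in Real-Time Maude, which is not reproduced here; as a loose, purely illustrative analogue, the property being checked ("is attention ever kept away from driving longer than a bound?") can be mimicked by a toy randomized simulation. All task costs, probabilities and the bound are assumptions:

```python
import random

# Toy discrete-time sketch of attention switching between driving and a GPS task.
# Numbers are illustrative assumptions; the paper uses Real-Time Maude model checking.
random.seed(0)

GPS_STEP_COST = [2, 1, 3, 2, 4]   # assumed cognitive cost (time units) of GPS sub-tasks
MAX_AWAY = 5                      # safety bound: max consecutive time units off driving

def longest_time_away() -> int:
    """Longest stretch of consecutive time units spent away from driving in one trace."""
    worst = current = 0
    for cost in GPS_STEP_COST:
        if random.random() < 0.5:     # driver glances back at the road between sub-tasks
            current = 0
        current += cost               # attention held by the GPS for `cost` time units
        worst = max(worst, current)
    return worst

violations = sum(longest_time_away() > MAX_AWAY for _ in range(10_000))
print(f"traces violating the attention bound: {violations}/10000")
```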

14 citations


Proceedings ArticleDOI
TL;DR: This paper focuses on the interaction of artificial intelligence, care providers and policymakers and, using a systems thinking approach, analyzes their impact on clinical decision making.
Abstract: "What is a system?" Is one of those questions that is yet not clear to most individuals in this world. A system is an assemblage of interacting, interrelated and interdependent components forming a complex and integrated whole with an unambiguous and common goal. This paper emphasizes on the fact that all components of a complex system are interrelated and interdependent in some way and the behavior of that system depends on these interdependencies. A health care system as portrayed in this article is widespread and complex. This encompasses not only "hospitals" but also governing bodies like the FDA, technologies such as AI, biomedical devices, Cloud computing and many more. The interactions between all these components govern the behavior and existence of the overall healthcare system. In this paper, we focus on the interaction of artificial intelligence, care providers and policymakers and analyze using systems thinking approach, their impact on clinical decision making.

10 citations


Journal ArticleDOI
TL;DR: This paper extends PSPs by considering Boolean as well as atomic numerical assertions, an encoding from extended PSPs to LTL formulas is contributed, and an algorithm computing inconsistency explanations are presented, i.e., irreducible inconsistent subsets of the original set of requirements.
Abstract: Property specification patterns (PSPs) have been proposed to ease the formalization of requirements, yet enable automated verification thereof. In particular, the internal consistency of specifications written with PSPs can be checked automatically with the use of, for example, linear temporal logic (LTL) satisfiability solvers. However, for most practical applications, the expressiveness of PSPs is too restricted to enable writing useful requirement specifications, and proving that a set of requirements is inconsistent can be worthless unless a minimal set of conflicting requirements is extracted to help designers to correct a wrong specification. In this paper, we extend PSPs by considering Boolean as well as atomic numerical assertions, we contribute an encoding from extended PSPs to LTL formulas, and we present an algorithm computing inconsistency explanations, i.e., irreducible inconsistent subsets of the original set of requirements. Our extension enables us to reason about the internal consistency of functional requirements which would not be captured by basic PSPs. Experimental results demonstrate that our approach can check and explain (in)consistencies in specifications with nearly two thousand requirements generated using a probabilistic model, and that it enables effective handling of real-world case studies.
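The abstract does not spell out the encoding itself; as a generic illustration of the idea, the classical "Response" pattern extended with an atomic numerical assertion might be written and encoded along the following lines (illustrative atoms, not the paper's syntax):

```latex
% Requirement (extended PSP, illustrative):
%   Globally, it is always the case that if (temperature > 90) holds,
%   then alarm_on eventually holds.
%
% Corresponding LTL encoding:
\[
  \mathbf{G}\bigl( (\mathit{temperature} > 90) \rightarrow \mathbf{F}\,\mathit{alarm\_on} \bigr)
\]
```

An inconsistency explanation would then be an irreducible subset of such formulas whose conjunction is unsatisfiable.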

10 citations


Journal ArticleDOI
TL;DR: This paper presents two model-based testing frameworks that additionally cover the stochastic aspects in hard and soft real-time systems and highlights the trade-off of simple and efficient statistical evaluation for Markov automata versus precise and realistic modelling with Stochastic automata.
Abstract: Many systems are inherently stochastic: they interact with unpredictable environments or use randomised algorithms. Classical model-based testing is insufficient for such systems: it only covers functional correctness. In this paper, we present two model-based testing frameworks that additionally cover the stochastic aspects in hard and soft real-time systems. Using the theory of Markov automata and stochastic automata for specifications, test cases, and a formal notion of conformance, they provide clean mechanisms to represent underspecification, randomisation, and stochastic timing. Markov automata provide a simple memoryless model of time, while stochastic automata support arbitrary continuous and discrete probability distributions. We cleanly define the theoretical foundations, outline practical algorithms for statistical conformance checking, and evaluate both frameworks’ capabilities by testing timing aspects of the Bluetooth device discovery protocol. We highlight the trade-off of simple and efficient statistical evaluation for Markov automata versus precise and realistic modelling with stochastic automata.
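The distinction between the two models can be illustrated by how delays are sampled during statistical testing. The rates and distributions below are assumptions for illustration only:

```python
import random

random.seed(42)

# In a Markov automaton, delays are exponentially distributed (memoryless), so a
# single rate per state suffices. A stochastic automaton may attach an arbitrary
# distribution to each clock, e.g. a uniform back-off window. Values are illustrative.
def markov_delay(rate: float) -> float:
    return random.expovariate(rate)

def stochastic_delay() -> float:
    return random.uniform(0.00064, 0.00256)   # assumed back-off window, seconds

exp_samples = [markov_delay(1000.0) for _ in range(100_000)]
uni_samples = [stochastic_delay() for _ in range(100_000)]

print(f"exponential mean delay: {sum(exp_samples) / len(exp_samples) * 1e3:.3f} ms")
print(f"uniform mean delay:     {sum(uni_samples) / len(uni_samples) * 1e3:.3f} ms")
```

Statistical conformance checking then compares such sampled observations of the implementation against what the specification model allows.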

10 citations


Journal ArticleDOI
TL;DR: Design pattern recognition is performed using learning-based algorithms, namely an artificial neural network and logistic regression, and the presented method is validated using three case studies: JRefactory, JUnit and Quaqua.
Abstract: Recognizing design patterns in source code helps improve reusability and maintainability, which play an essential role during the analysis and design phases of the software development process. Software patterns provide design-level documents that are applied to recurring design issues. Analysis of design patterns is often carried out using forward engineering as well as reverse engineering. In this study, a reverse engineering approach has been applied to recognize design patterns. The study comprises two phases: preparation of the requisite dataset based on object-oriented software metrics, and recognition of design patterns. The first phase, i.e., dataset preparation, is carried out using various object-oriented metrics. Design pattern recognition is performed using learning-based algorithms, namely an artificial neural network and logistic regression. The presented method is validated using three case studies: JRefactory, JUnit and Quaqua.
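The classification step can be pictured with a small scikit-learn sketch; the metric vectors and labels below are synthetic stand-ins, not the JRefactory/JUnit/Quaqua data used in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row stands for a class described by object-oriented metrics (e.g. WMC, DIT,
# NOC, CBO, RFC); the label says whether the class participates in a given pattern.
# Data is synthetic and for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic "pattern participant" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
```

An artificial neural network, as also used in the paper, would simply be a different estimator over the same metrics-as-features formulation.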

9 citations


Proceedings ArticleDOI
TL;DR: The authors present an example of the benefits that using the model, rather than referring to the handbook alone, yields for process tailoring in terms of efficiency and overall system optimisation.
Abstract: The primary reference for modern systems engineering practice is the International Council on Systems Engineering (INCOSE) Systems Engineering Handbook. The main problem with the handbook is its very high degree of complexity: it can appear fragmented and challenging for readers to follow, and the isolation of the Input-Process-Output (IPO) diagrams withholds an overall system vision of the discipline. The authors highlight the embedded complexity of systems engineering, and of the INCOSE Systems Engineering Handbook in particular, identifying process modelling as a viable and efficient solution. The authors propose IDEF0 as an ideal modelling technique due to the similarity of the IDEF0 diagrams to the IPO diagrams and the addition of functional modelling with hierarchical decomposition. In conclusion, the authors present an example of the benefits that using the model, rather than referring to the handbook alone, yields for process tailoring in terms of efficiency and overall system optimisation.

9 citations


Journal ArticleDOI
TL;DR: The results revealed that there is a strong and positive correlation between code clone refactoring and reduction in the size of unit test cases, and that code quality attributes related to the testability of classes are significantly improved when clones are refactored.
Abstract: This paper aims at empirically measuring the effect of clone refactoring on the size of unit test cases in object-oriented software. We investigated various research questions related to the: (1) impact of clone refactoring on source code attributes (particularly size, complexity and coupling) that are related to testability of classes, (2) impact of clone refactoring on the size of unit test cases, (3) correlations between the variations observed after clone refactoring in both source code attributes and the size of unit test cases and (4) variations after clone refactoring in the source code attributes that are more associated with the size of unit test cases. We used different metrics to quantify the considered source code attributes and the size of unit test cases. To investigate the research questions, and develop predictive and explanatory models, we used various data analysis and modeling techniques, particularly linear regression analysis and five machine learning algorithms (C4.5, KNN, Naive Bayes, Random Forest and Support Vector Machine). We conducted an empirical study using data collected from two open-source Java software systems (ANT and ARCHIVA) that have been clone refactored. Overall, the paper contributions can be summarized as: (1) the results revealed that there is a strong and positive correlation between code clone refactoring and reduction in the size of unit test cases, (2) we showed how code quality attributes that are related to testability of classes are significantly improved when clones are refactored, (3) we observed that the size of unit test cases can be significantly reduced when clone refactoring is applied, and (4) complexity/size measures are commonly associated with the variations of the size of unit test cases when compared to coupling.
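As a sketch of the kind of analysis described (relating variations in source-code attributes to variations in test-case size), one could fit a linear model on per-class deltas; the data here is synthetic, not the ANT/ARCHIVA measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic per-class deltas after clone refactoring:
# columns = [delta complexity, delta size, delta coupling].
rng = np.random.default_rng(1)
delta_metrics = rng.normal(size=(100, 3))
delta_test_size = (2.0 * delta_metrics[:, 0] + 1.5 * delta_metrics[:, 1]
                   + 0.2 * delta_metrics[:, 2] + rng.normal(scale=0.5, size=100))

model = LinearRegression().fit(delta_metrics, delta_test_size)
print("coefficients [complexity, size, coupling]:", np.round(model.coef_, 2))
print("R^2:", round(model.score(delta_metrics, delta_test_size), 2))
```

In such a model, larger coefficients on the complexity and size columns would mirror the paper's observation that those attributes track test-case size more closely than coupling does.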

8 citations


Journal ArticleDOI
TL;DR: Results from using SCS-RA in the development of a microsatellite control system for the National Institute for Space Research showed a significant reduction of effort and benefits in interoperability, scalability, and sharing of ground resources.
Abstract: Software in the satellite control systems (SCS) domain plays a relevant role in space systems, being responsible for ensuring the functioning of satellites from launch into orbit to the end of their lifetime. Systems in this domain are complex and constantly evolving due to the technological advancement of satellites, the significant increase in the number of controlled satellites, and the interoperability among space organizations. However, to cope with such complexity and evolution, the architectures of these systems have usually been designed in isolation by each organization and hence may be prone to duplicated effort and interoperability difficulties. In parallel, reference architectures, a special type of software architecture that aggregates knowledge of a specific domain, have played an important role in the successful development, standardization, and evolution of systems in several domains. Nevertheless, the use of a reference architecture has not been explored in the SCS domain. Thus, this article presents a reference architecture for satellite control systems (SCS-RA). Results from using SCS-RA in the development of a microsatellite control system for the National Institute for Space Research showed a significant reduction of effort and benefits in interoperability, scalability, and sharing of ground resources.

8 citations


Proceedings ArticleDOI
TL;DR: The aim of this work is to develop a model to simulate oil spill trajectories at sea and to provide a useful tool for decision makers who have to manage oil spill emergencies.
Abstract: Due to recent maritime accidents involving hazardous material releases into the sea, increasing attention has been dedicated to maritime risk assessment and to the protection of natural resources, especially in coastal zones. The aim of this work is to develop a model that simulates oil spill trajectories at sea and provides a useful tool for decision makers who have to manage oil spill emergencies. The proposed oil spill model, based on the Lagrangian approach, describes the behaviour of an oil slick on the water surface in space and time. The dynamics of oil spreading on the sea surface is modelled taking into account actual values of wind speed and surface current intensity. The model has been applied to simulate the oil spill propagation in the real case of the collision off the coast of Saint-Tropez (France) that occurred on 7 October 2018.
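A minimal Lagrangian drift step of the kind such models are built on can be sketched as follows; the wind-drift factor, diffusivity and forcing values are generic assumptions, not the paper's calibration:

```python
import numpy as np

# Minimal Lagrangian sketch: each slick particle is advected by the surface current
# plus a fraction of the wind, with a random walk for horizontal diffusion.
rng = np.random.default_rng(7)

N = 1000                        # particles
DT = 600.0                      # time step, s
STEPS = 144                     # 24 hours
WIND = np.array([5.0, 2.0])     # wind velocity, m/s (assumed)
CURRENT = np.array([0.3, 0.1])  # surface current, m/s (assumed)
WIND_FACTOR = 0.03              # classical ~3% wind drift factor
DIFFUSIVITY = 1.0               # horizontal diffusivity, m^2/s (assumed)

pos = np.zeros((N, 2))          # all particles released at the spill location
for _ in range(STEPS):
    drift = CURRENT + WIND_FACTOR * WIND
    pos += drift * DT + rng.normal(scale=np.sqrt(2 * DIFFUSIVITY * DT), size=pos.shape)

print("slick centre after 24 h (km):", np.round(pos.mean(axis=0) / 1000, 2))
print("slick spread (std, km):      ", np.round(pos.std(axis=0) / 1000, 2))
```

A full model like the one described above would additionally feed in time-varying wind and current fields along the actual coastline.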

Proceedings ArticleDOI
TL;DR: The architecture process builds on the ARCADIA method: a systems engineering approach that starts from the needs and desires obtained from stakeholders and proceeds to a logical architecture is a potential solution.
Abstract: Predictive maintenance is an important field of research for determining the exact moment to trigger maintenance actions. Despite the potential benefits of predictive maintenance in terms of maintenance cost reduction and safety improvement, its implementation faces many shortcomings. One of the main shortcomings is the lack of a systematic approach to developing predictive maintenance systems. Existing generic architectures like OSA-CBM remain insufficient to address all requirements of new systems. A systems engineering approach that starts from the needs and desires obtained from stakeholders and proceeds to a logical architecture is a potential solution. Specific analysis of the needs and desires is used to elicit, classify and prioritize the requirements for an easier transition to designing the system architecture. The architecture process builds on the ARCADIA method.

Journal ArticleDOI
TL;DR: The study shows that when timing features of primary keystroke dynamics are incorporated with the traits, user identification accuracy can be improved by up to 17%.
Abstract: Age group, gender, handedness and the number of hands used are common personality traits of a typist, and identifying such traits can be a key to identifying a person in today's fast world. This work is an indicative pathway toward that goal, achieved by monitoring and analyzing the way a user types on the touch screen of a smartphone. The study of such traits and the analysis of typing patterns on conventional computer keyboards have been well investigated. However, the conventional keyboard is being replaced with the advent of smartphones, which offer a variety of features, low cost and portability. Therefore, identifying traits through the touch screen is more significant and might be notably beneficial for personal identity prediction and verification. In this paper, we discuss the data acquisition method, classification approach and evaluation process found most appropriate to discover the trait identities to be used in a variety of Web-based applications, specifically e-commerce, online examination, digital forensics, targeted advertisement, age-restricted access control, human–machine interaction, social networks and user identity verification akin to biometrics. Multiple machine learning (ML) methods were used to develop the model, and a suitable and practical evaluation option—leave-one-user-out cross-validation—was used to check the validity of the proposed model. The efficacy of our approach is illustrated on a dataset collected in a Web-based environment from 92 volunteers. The probability of predicting a user with such traits is also illustrated. The study shows that when timing features of primary keystroke dynamics are incorporated with the traits, user identification accuracy can be improved by up to 17%.
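The leave-one-user-out protocol mentioned above can be expressed directly with scikit-learn's grouped cross-validation; the features and trait labels below are synthetic placeholders rather than the 92-volunteer dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: timing features (e.g. hold times, flight times) per typing
# sample, a user id per sample, and a binary trait label (e.g. handedness).
rng = np.random.default_rng(3)
n_samples, n_users = 460, 92
X = rng.normal(size=(n_samples, 6))
users = rng.integers(0, n_users, size=n_samples)
y = users % 2                                   # synthetic trait label

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=users, cv=LeaveOneGroupOut())
print(f"mean leave-one-user-out accuracy: {scores.mean():.2f}")
```

Holding out one user at a time ensures the reported accuracy reflects generalization to typists never seen during training.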

Proceedings ArticleDOI
TL;DR: The presented hybrid characteristics model combines different variant management concepts such as feature-oriented domain analysis (FODA) and orthogonal variability modeling and allows the model-based expression of variants for all perspectives within the concept and development of E/E systems, ranging from functional aspects of the system to generic product characteristics.
Abstract: The upcoming trends of autonomous driving and electrification in the automotive sector lead to an increasing amount of electronics and software. Highly interconnected and cross-domain functionalities with sophisticated algorithms pose a significant challenge to the automotive systems engineering process, which needs to cope with the added complexity. Managing and handling the variants of different Electric/Electronic (E/E) systems throughout the entire development is one of the key factors for achieving traceability from the requirements phase to production. In this paper, we propose an approach for variant management of E/E systems within a model-based engineering process. The presented hybrid characteristics model combines different variant management concepts such as feature-oriented domain analysis (FODA) and orthogonal variability modeling. It is divided into three submodels and allows the model-based expression of variants for all perspectives within the concept and development of E/E systems, ranging from functional aspects of the system to generic product characteristics. It therefore enhances a consistent and formalized description of variants throughout the entire development process and supports the configuration of product variants in an early systems engineering phase. Additionally, the configuration of derivatives and the reuse of early system and conceptual design decisions and models are enhanced. The approach was integrated within an E/E architecture development tool and is evaluated in productive use. A showcase for an exemplary system is demonstrated.

Proceedings ArticleDOI
TL;DR: New requirements from the VUCA world within the Lean Enterprise for the design of systems of objectives are derived and existing techniques for meeting these demands are examined.
Abstract: In an increasingly dynamic and complex environment, manufacturing enterprises have to reconcile business objectives across all business areas continuously in order to be able to respond to changes quickly and remain competitive. Hence, the objectives have to be derived consistently from the corporate strategy. Different methods that help coordinate objectives and general requirements for the design of systems of objectives exist. However, due to the volatile, uncertain, complex and ambiguous (VUCA) environment, further demands for the design and maintenance of systems of objectives emerge. Manufacturing enterprises face the challenge of adapting their systems of objectives to the ever-changing conditions in order to remain competitive. Therefore, the focus of this paper is to derive new requirements from the VUCA world within the Lean Enterprise for the design of systems of objectives and to examine existing techniques for meeting these demands.

Proceedings ArticleDOI
Tjerk Bijlsma, Bram van der Sanden, Yonghui Li, Rob Janssen, Raymond Tinsel 
TL;DR: A methodology to support decision making for evolutionary systems and decide on the most suitable design is presented and demonstrated on platooning designs from the automotive domain.
Abstract: Design decisions are made in an early-design phase of system development. These decisions have a big impact on the resulting system design and realization. Making design decisions in this stage is a complex and risky task, because there are a lot of uncertainties regarding their impact on the system qualities. Moreover, in many cases, trade-offs have to be made when there are conflicting objectives. This paper presents a methodology to support decision making for evolutionary systems, to decide on the most suitable design. The focus is on the embedded part of a system. Information is structured with explicit relations, such that the realization of a design can be traced towards the concerns a stakeholder has for the system. This structure enables architects and designers to understand and reason about the impact of a design with a system-wide perspective. The method consists of a calibration step which imports a current design, a design exploration step in which designs are selected, and a decision step where the most suitable design is chosen. The methodology is demonstrated by exploring and revealing the trade-offs for platooning designs from the automotive domain.

Proceedings ArticleDOI
Eric B. Dano 
TL;DR: The Resilient System Engineering (RSE) process is used to derive and identify resilience attributes for a Multi-UAV SoS architecture, and a Time/Frequency Difference of Arrival (T/FDOA)-based location algorithm is used to locate agents in distress.
Abstract: The use of Unmanned Aerial Vehicles (UAVs) in multi-mission Unmanned Aerial Systems (UASs) has grown exponentially in recent years. Multi-UAV Systems of Systems (SoS) have proliferated due to their ability to work autonomously, to decentralize mission workload and to exhibit resiliency by retaining mission functionality even if a UAV node or network connectivity is lost. This resiliency is not inherent and must be architected into both the SoS and the UAV node/payload, especially in the areas of communications, control, ad hoc networking, and path planning. As an exemplar, the Resilient System Engineering (RSE) process is used to derive and identify resilience attributes for a Multi-UAV SoS architecture. In this paper, the Multi-UAV SoS is tasked with Search and Rescue (SAR) and uses a Time/Frequency Difference of Arrival (T/FDOA)-based location algorithm to locate agents in distress. A simulation of the SAR SoS using Contract-Based Design (CBD) invariant contracts is used to quantify the intended resilience/emergence attributes of the Multi-UAV SAR SoS and the T/FDOA location approach.

Journal ArticleDOI
Alfons Laarman 
TL;DR: The theory shows that a set of n vectors with k slots can be compressed to a single slot plus O(log2(k)) overhead, and it is analytically shown that this compression can be attained in practice without compromising fast query times for state vector lookups.
Abstract: Efficiently deciding reachability for model checking problems requires storing the entire state space. We provide an information theoretical lower bound for these storage requirements using only the assumption of locality in the model checking input. The theory shows that a set of n vectors with k slots can be compressed to a single slot plus $$\mathcal O(\log _2(k))$$ overhead. Using a binary tree in combination with a compact hash table, we then analytically show that this compression can be attained in practice, without compromising fast query times for state vector lookups. Our implementation of this Compact Tree can compress $$n>2^{32}$$ state vectors of arbitrary length $$k \ll n$$ down to 32 bits per vector. This compression is lossless. Experiments with over 350 inputs in five different model checking formalisms confirm that the lower bound is reached in practice in a majority of cases, confirming the combinatorial nature of state spaces.
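A simplified flavour of the underlying idea (folding a state vector pairwise and interning each node once, so that shared subtrees across vectors are stored only a single time) can be sketched as below; this is an illustration, not the paper's Compact Tree implementation:

```python
# Illustrative tree compression for state vectors: fold a vector pairwise and
# store every (left, right) node once in a table, referring to it by index.
table = {}        # (left, right) -> index
entries = []      # index -> (left, right)

def intern(pair):
    if pair not in table:
        table[pair] = len(entries)
        entries.append(pair)
    return table[pair]

def fold(vector):
    """Reduce a state vector (length a power of two) to a single root index."""
    level = list(vector)
    while len(level) > 1:
        level = [intern((level[i], level[i + 1])) for i in range(0, len(level), 2)]
    return level[0]

# Two vectors differing in one slot share almost all of their subtrees.
a = fold((1, 2, 3, 4, 5, 6, 7, 8))
b = fold((1, 2, 3, 4, 5, 6, 7, 9))
print("roots:", a, b, "| distinct tree nodes stored:", len(entries))
```

The second vector reuses four of the seven nodes created for the first; with many similar vectors, sharing dominates and the amortized cost approaches roughly one entry per vector, which is the regime the paper's analysis addresses.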

Journal ArticleDOI
TL;DR: A traceability approach that relates requirements and design artifacts modeled in UML through a semantic model, an intelligent natural language processing technique that analyzes the semantics of sentences regardless of the language they are written in.
Abstract: One of the ultimate challenges in change impact analysis is traceability modeling between the various software artifacts in the software life cycle. This paper proposes a traceability approach that relates requirements and design artifacts modeled in UML. Our method faces two essential challenges: the semantic ambiguity of requirement artifacts, which may be written in different natural languages, and the heterogeneity of the artifacts that have to be traced (textual descriptions, UML diagrams, etc.). To face these challenges, our method determines the semantic relationships between the requirements, modeled with the use case diagram, and the design, modeled with the class and sequence diagrams, through a semantic model: an intelligent natural language processing technique that analyzes the semantics of sentences regardless of the language they are written in. Thanks to the semantic model, our approach compares similarities between words having the same role, which makes it more effective than computing similarities between words of different kinds. The empirical investigation demonstrates the advantages of semantic traceability using a semantic model compared to the use of an information retrieval technique.
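For contrast with the semantic model, the information-retrieval style of trace-link recovery it is compared against can be sketched as a plain lexical similarity ranking; the requirement and artifact texts below are hypothetical examples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Rank design artifacts by lexical (TF-IDF cosine) similarity to a requirement.
# This illustrates the kind of IR baseline the semantic approach is compared to;
# all names and texts here are hypothetical.
requirement = "The system shall allow a registered user to reset a forgotten password"
design_artifacts = [
    "AccountManager class: lets a registered user reset the account password",
    "Sequence diagram: user submits an order and receives a confirmation",
    "ReportGenerator class produces monthly usage statistics",
]

vectors = TfidfVectorizer().fit_transform([requirement] + design_artifacts)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for artifact, score in sorted(zip(design_artifacts, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {artifact}")
```

A purely lexical ranking like this breaks down when requirements and design use different languages or vocabulary, which is precisely where the semantic model described above is intended to help.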

Journal ArticleDOI
TL;DR: This survey paper discusses the challenges that arise and their possible solutions by considering the entities related to data services, and will help data scientists understand the supporting parameters of data storage systems when designing big data management systems.
Abstract: In the recent era, data science plays an important role in the health-care domain by providing cost-effective and better treatment procedures. To achieve this goal, the data management system makes a huge contribution by controlling, arranging, storing and preprocessing large volumes of health data. There have already been many investigations into, and designs of, approaches that support big data applications in different domains. Still, management of big data is a challenging task for the data scientist due to the complex characteristics of the data and the demands of the application. In this survey paper, we discuss the challenges that arise and their possible solutions by considering the entities related to data services. This will help data scientists understand the supporting parameters of data storage systems when designing big data management systems.

Proceedings ArticleDOI
TL;DR: This paper models and describes the systems engineering process life cycle, in particular the V-model, and presents the first-generation and second-generation system PLE processes.
Abstract: Systems engineering is an interdisciplinary approach to translating users' needs into the realization of a system, its architecture and design through an iterative process that results in an effective operational system. While systems engineering has focused on defining a systematic life cycle process to meet the quality requirements, reuse has largely been an implicit concern. Systematic and comprehensive reuse has been addressed in the product line engineering (PLE) process. In practice, systems engineering process and product line engineering have developed and evolved separately. In this paper we aim to define the system product line engineering process that integrates the PLE reuse process with the current systems engineering process. For this we first model and discuss the so-called first and second generation PLE processes. Subsequently, we model and describe the systems engineering process life cycle and in particular the V-model. Based on these two inputs we present the first generation system PLE and second generation system PLE. We discuss the integrated processes and the lessons learned.

Journal ArticleDOI
TL;DR: The efficiency of a hierarchical classification strategy for fractal image coding that uses adaptive quadtree partitioning is discussed; the scheme forms two-level hierarchical domain groups, and ranges are matched against the similar hierarchical domain class.
Abstract: Fractal-based image coding is one of the efficient methods for grayscale images, since the reconstructed images are resolution independent and reconstruction time is low. This paper discusses the efficiency of a hierarchical classification strategy for fractal image coding that uses adaptive quadtree partitioning. The scheme forms two-level hierarchical domain groups, and ranges are matched against the similar hierarchical domain class. The fractal image coding technique with the hierarchical classification strategy is then further modified to improve the compression ratio by applying an efficient lossless coding scheme, OLZW, to the fractal-compressed image. A variant of OLZW, i.e., MOLZW, is also applied. These modified variants show significant improvements in compression ratio without degradation of image quality.

Journal ArticleDOI
TL;DR: The proposed multi-dimensional encryption technique uses two phases to perform the encryption: an image pixel shuffling phase and an image pixel rearrangement phase.
Abstract: Image encryption is one of the techniques used to maintain image confidentiality. Trust needs to be created and retained in the cloud between the service provider and the end user. Existing image encryption methods use map structures, the Rubik's cube method, DCT-based approaches and S-box designs, and are mostly designed with offline image encryption in mind. For online encryption, image encryption algorithms need to be redefined with a lightweight approach that provides an improved security level, or at least the same security level as existing algorithms. These considerations are taken into account in the proposed multi-dimensional encryption technique. The proposed technique uses two phases to perform the encryption: an image pixel shuffling phase and an image pixel rearrangement phase. The technique has already been tested successfully on standard grayscale images, and the results satisfied the objectives. In this paper, the proposed technique is tested on standard color images, and the results obtained are used to analyze the performance and efficiency of the proposed technique against the existing technique using different parameters, including PSNR, MSE, information entropy, correlation coefficient, NPCR and UPCI.
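Since the abstract does not detail the two phases, the following is only a generic sketch of a keyed pixel-permutation step together with the MSE/PSNR evaluation it would be judged by; it is not the proposed algorithm:

```python
import numpy as np

# Generic keyed pixel-shuffling sketch plus MSE/PSNR evaluation (illustrative only).
image = np.tile((np.arange(64) * 4).astype(np.uint8), (64, 1))   # stand-in grayscale image

KEY = 12345
perm = np.random.default_rng(KEY).permutation(image.size)

def shuffle(img: np.ndarray) -> np.ndarray:
    return img.ravel()[perm].reshape(img.shape)

def unshuffle(img: np.ndarray) -> np.ndarray:
    flat = np.empty(img.size, dtype=img.dtype)
    flat[perm] = img.ravel()
    return flat.reshape(img.shape)

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cipher = shuffle(image)
print(f"PSNR(plain, cipher) = {psnr(image, cipher):.2f} dB  (low = heavily scrambled)")
print("lossless recovery:", bool(np.array_equal(unshuffle(cipher), image)))
```

Metrics such as NPCR and information entropy would be computed on the cipher image in the same spirit to judge diffusion and randomness.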

Proceedings ArticleDOI
TL;DR: The method associated with the free SysML tool TTool is revisited in order to take network dimensioning into account in the early steps of the life cycle of distributed systems.
Abstract: Acceptance of the Systems Modeling Language (SysML) among system engineers heavily depends on the method and tool associated with the language. This particularly applies to families of systems where increasing data exchanges between pieces of equipment create high requirements for the networks. The paper therefore revisits the method associated with the free SysML tool TTool in order to take network dimensioning into account in the early steps of the life cycle of distributed systems. TTool is interfaced with WoPANets, a tool based on network calculus theory. An AFDX network serves as the case study.
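The abstract only names network calculus; for orientation, the standard single-node bound such tools build on is the following textbook result (not specific to WoPANets or the AFDX case study):

```latex
% For a flow constrained by a token-bucket arrival curve alpha(t) = b + r t and a
% server offering a rate-latency service curve beta(t) = R (t - T)^+ with r <= R,
% the worst-case delay and backlog are bounded by
\[
  d_{\max} \;\le\; T + \frac{b}{R},
  \qquad
  q_{\max} \;\le\; b + r\,T .
\]
```

Network dimensioning then amounts to checking that such bounds, composed along each path of the network, stay within the latency requirements captured in the model.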

Proceedings ArticleDOI
TL;DR: The paper concludes by considering next steps including the challenge of assuring that such a framework is being implemented effectively, and the feasibility of collecting sufficient empirical evidence to conduct cost-benefit analyses of the impact of complexity evaluation.
Abstract: System complexity, and its evaluation, poses several challenges to any organization hoping to engineer systems operating in a System-of-Systems (SoS) context. Here, we analyse one particular industrial complexity evaluation decision support tool that has been in use for several years across a variety of engineering projects, with the aim of better understanding and overcoming a particular subset of these challenges. While improvements to the tool itself (such as making SoS considerations explicit, or employing structured communications techniques to improve elicitation) are a legitimate line of enquiry, the focus of the current paper is the set of issues relating to the wider organizational context within which any such tool needs to be embedded. Here we characterise this context in terms of a complexity evaluation framework and, based on the case study analysis, argue for a set of key framework features: collaborative effort towards building a shared understanding of contextually relevant complexity factors, iterative complexity (re-)evaluation over the course of a project, and progressive refinement of complexity evaluation tools and processes by linking them to project outcomes in the form of a wider organizational learning cycle. The paper concludes by considering next steps, including the challenge of assuring that such a framework is implemented effectively and, relatedly, the feasibility of collecting sufficient empirical evidence to conduct cost-benefit analyses of the impact of complexity evaluation.

Proceedings ArticleDOI
TL;DR: This survey aims to identify existing tools and services in order to adapt them to future challenges, and suggests that the creation of a limited number of fully integrated regulated markets is the most consensual solution to avoid fragmentation into many different markets and products.
Abstract: In this work, a survey was carried out to identify the current European landscape from grid and market stakeholders' point of view and to improve the coordination between Transmission System Operators (TSOs) and Distribution System Operators (DSOs). The survey includes two parts, one dedicated to grid tools and services and the other to market tools and services. It aims to identify existing tools and services in order to adapt them to future challenges. Based on the survey results, it was identified that the major concerns of the energy stakeholders relate to the integration of the different platforms used for different purposes (for example, operational planning and real-time systems) and to how to exchange information/data between parties in an expedited and standardized way. The survey participants see energy storage as the most essential technology in the future energy system, followed by smart metering, online voltage regulation and demand response services. On the market landscape, the survey results suggest that the creation of a limited number of fully integrated regulated markets is the most consensual solution to avoid fragmentation into many different markets and products.

Proceedings ArticleDOI
TL;DR: The capability of SoS Explorer to generate, assess and select an SoS meta-architecture for an Intelligent Transportation System as the application domain is presented.
Abstract: Socio-technical systems entail complex logic with many levels of reasoning. These systems are organized by a web of connections, which gives rise to systems of systems (SoS) and can demonstrate self-driven capability. Non-linear relationships among the participating systems result in emergent behavior that is not deterministic. Therefore, architecting an SoS with complex, dynamic and evolving systems is not trivial. The challenge is to create organized complexity that allows each individual system to achieve its dynamically changing goals. To address this challenge, SoS Explorer, an SoS architecting tool, can be used to define, formulate, and solve numerous socio-technical problems. SoS Explorer integrates a fuzzy inference system with a genetic algorithm that guides the optimization process in generating a meta-architecture providing the best possible value for the overall objective of the SoS. This paper presents the capability of SoS Explorer to generate, assess and select an SoS meta-architecture for an Intelligent Transportation System as the application domain.

Proceedings ArticleDOI
TL;DR: Two promising unsupervised techniques, One-Class SVM (OCSVM) and Isolation Forest (IF), are investigated; both optimize the separation between relevant/normal points and irrelevant/noisy points.
Abstract: 3D point clouds are increasingly getting attention for perceiving the 3D environment, which is needed in many emerging applications. This data structure is challenging due to its characteristics and the limitations of the acquisition step, which adds a considerable amount of noise. Therefore, enhancing 3D point clouds is a crucial and critical step. In this paper, we investigate two promising unsupervised techniques, One-Class SVM (OCSVM) and Isolation Forest (IF), both of which optimize the separation between relevant/normal points and irrelevant/noisy points. For evaluation, three metrics are computed: processing time, the number of detected noisy points, and Peak Signal-to-Noise Ratio (PSNR). These are used to compare both proposed techniques with one of the filters recommended in the literature, the Moving Least Squares (MLS) filter. The obtained results reveal promising capability in terms of effectiveness. However, the OCSVM technique suffers from high computational time; its efficiency is therefore enhanced using a modern Graphics Processing Unit (GPU), with an average rate improvement of 1.8.
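A minimal version of the two detectors applied to a synthetic noisy cloud can be written directly with scikit-learn; the data and parameters below are illustrative, not the paper's evaluation setup:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# Synthetic noisy point cloud: 2000 points hugging the z = 0 plane plus 100
# scattered noisy points. Parameters are illustrative, not tuned as in the paper.
rng = np.random.default_rng(0)
surface = rng.uniform(-1, 1, size=(2000, 3))
surface[:, 2] = 0.01 * rng.normal(size=2000)
noise = rng.uniform(-1, 1, size=(100, 3))
cloud = np.vstack([surface, noise])

detectors = [
    ("Isolation Forest", IsolationForest(contamination=0.05, random_state=0)),
    ("One-Class SVM", OneClassSVM(nu=0.05, gamma="scale")),
]
for name, model in detectors:
    labels = model.fit_predict(cloud)          # +1 = kept point, -1 = flagged as noise
    print(f"{name}: flagged {np.sum(labels == -1)} of {len(cloud)} points")
```

In practice the flagged points would be dropped before downstream processing, and runtime/PSNR comparisons like those in the paper would decide between the two detectors.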

Proceedings ArticleDOI
TL;DR: This paper provides a holistic view of the top production readiness challenges of the decentralized application for enterprise along with alternative solutions, and proposes a novel approach to identify challenges and rank them.
Abstract: Distributed Ledger Technology promises to solve multiple problems emerging from existing IT solutions and business processes at the intersection of trust, privacy, security, automation, authentication and authorization. However, multiple obstacles restrict the potential of the technology, specifically production-related challenges in the enterprise-centric ecosystem. The paper provides a holistic view of the top production readiness challenges of the decentralized application (dApp) for the enterprise, along with alternative solutions. We propose a novel approach to identify challenges and rank them. The identified challenges and possible solutions provide a baseline assessment. We present a novel, reusable, and extensible framework to help systems engineering teams assess the challenges towards production readiness. We expect this study to benefit the DLT community by enhancing the understanding of challenges and potential research areas for contribution.

Proceedings ArticleDOI
TL;DR: The goal of this work is to present the development of an agile tool aimed at ensuring the consistency of design choices with project requirements, propagating them through systems, subsystems and components with the associated technical specifications.
Abstract: Large experimental facilities always require the development or customization of many non-conventional systems, which constitute a relevant part of the machine. For this reason, in these cases an approach based on Systems Engineering is very useful to support design choices, given the complexity of the project and the many interconnections between different systems and subsystems. The case under analysis is the Divertor Tokamak Test (DTT), a nuclear fusion facility based at ENEA C.R. Frascati, Italy. The goal of this work is to present the development of an agile tool aimed at ensuring the consistency of design choices with project requirements, propagating them through systems, subsystems and components with the associated technical specifications. Additionally, a methodology for requirements verification in the construction phase is discussed. In presenting the implemented methodology, its practical application to three different situations within the DTT project development is discussed.