
Showing papers in "Advances in Science, Technology and Engineering Systems Journal in 2017"


Journal ArticleDOI
TL;DR: This survey analyzes text mining studies related to Facebook and Twitter, the two dominant social media platforms in the world, to describe how studies in social media have used text analytics and text mining techniques to identify the key themes in the data.
Abstract: Text mining has become one of the trendy fields incorporated into several research areas, such as computational linguistics, Information Retrieval (IR) and data mining. Natural Language Processing (NLP) techniques are used to extract knowledge from text written by human beings. Text mining reads an unstructured form of data to provide meaningful information patterns in the shortest possible time. Social networking sites are a great source of communication, as most people in today's world use these sites daily to keep connected to each other. It has become common practice not to write sentences with correct grammar and spelling. This practice may lead to different kinds of ambiguities, such as lexical, syntactic, and semantic, and due to this type of unclear data it is hard to find the actual data order. Accordingly, we are conducting an investigation with the aim of surveying different text mining methods for extracting various textual patterns from social media websites. This survey aims to describe how studies in social media have used text analytics and text mining techniques for the purpose of identifying the key themes in the data. The survey focuses on analyzing text mining studies related to Facebook and Twitter, the two dominant social media platforms in the world. Its results can serve as baselines for future text mining research.
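The survey reviews methods rather than prescribing one, but a minimal sketch of the kind of theme-identification pipeline it covers could look like the following (Python with scikit-learn; the example posts, theme count, and top-term cutoff are invented for illustration):

```python
# Hypothetical mini-example: extracting key themes from short social media
# posts with TF-IDF and non-negative matrix factorization (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

posts = [
    "traffic jam downtown again this morning",
    "loving the new phone camera, photos look great",
    "morning commute ruined by the downtown traffic",
    "camera update makes my photos so much better",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Factor the document-term matrix into 2 latent themes.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(nmf.components_):
    top = weights.argsort()[::-1][:3]  # 3 highest-weighted terms per theme
    print(f"theme {k}:", [terms[i] for i in top])
```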

158 citations


Journal ArticleDOI
TL;DR: A solution that focuses on detecting cyberbullying in Arabic content is presented and assessed, together with a thorough survey of previous work on cyberbullying detection.
Abstract: Article history: Received: 12 November, 2017 Accepted: 03 December, 2017 Online: 23 December, 2017. With the abundance of the Internet and electronic devices, bullying has moved from schools and backyards into cyberspace, where it is now known as cyberbullying. Cyberbullying affects many children around the world, especially in Arab countries, so concerns about it are rising. A lot of research is ongoing with the purpose of diminishing cyberbullying, with current efforts focused on its detection and mitigation. Earlier research dealt with the psychological effects of cyberbullying on the victim and the predator. Much research work has proposed solutions for detecting cyberbullying in English and a few other languages, but none until now has covered cyberbullying in Arabic. Several techniques contribute to cyberbullying detection, mainly Machine Learning (ML) and Natural Language Processing (NLP). This journal article extends a previous paper to elaborate on a solution for detecting and stopping cyberbullying. It first presents a thorough survey of previous work on cyberbullying detection; then a solution that focuses on detecting cyberbullying in Arabic content is presented and assessed.
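The paper's Arabic detector is not reproduced here, but a hedged sketch of a typical ML/NLP pipeline of the kind it surveys might use character n-grams, which sidestep some Arabic tokenization difficulties (the toy texts and labels below are invented):

```python
# Illustrative sketch (not the paper's implementation): a supervised
# cyberbullying classifier for Arabic text using character n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = ["انت غبي جدا", "شكرا لك يا صديقي", "اخرس يا فاشل", "يوم جميل اليوم"]
train_labels = [1, 0, 1, 0]  # 1 = bullying, 0 = benign (toy labels)

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(train_texts, train_labels)
print(model.predict(["انت فاشل"]))  # likely labeled as bullying on this toy data
```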

60 citations


Journal ArticleDOI
TL;DR: A learning method is proposed based on a cost-sensitive extension of the Least Mean Square (LMS) algorithm that penalizes errors of different samples with different weights, together with some rules of thumb to determine those weights.
Abstract: Article history: Received: 19 March, 2017 Accepted: 04 April, 2017 Online: 15 April, 2017. Imbalanced datasets are a problem often found in health applications. In medical data classification, we often face an imbalanced number of data samples, where at least one of the classes constitutes only a very small minority of the data. At the same time, this represents a difficult problem for most machine learning algorithms. There have been many works dealing with the classification of imbalanced datasets. In this paper, we propose a learning method based on a cost-sensitive extension of the Least Mean Square (LMS) algorithm that penalizes errors of different samples with different weights, along with some rules of thumb to determine those weights. After the balancing phase, we apply different techniques (Support Vector Machine [SVM], K-Nearest Neighbor [K-NN] and Multilayer Perceptron [MLP]) to the balanced datasets. We also compare the results obtained before and after the balancing method. We obtained the best results reported in the literature, with a classification accuracy of 100%.
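As a rough illustration of the core idea, a cost-sensitive LMS update that weights each sample's error by its class rarity could look like this sketch (the paper's exact weighting rules may differ; the data and the inverse-frequency rule here are assumptions):

```python
# Minimal sketch of a cost-sensitive LMS update for imbalanced data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))               # feature vectors
y = (rng.random(200) < 0.1).astype(float)   # imbalanced labels (~10% positive)

# Rule of thumb: weight each class inversely to its frequency.
freq_pos = y.mean()
cost = np.where(y == 1, 1.0 / freq_pos, 1.0 / (1.0 - freq_pos))

w = np.zeros(3)
mu = 0.01  # learning rate
for xi, yi, ci in zip(X, y, cost):
    e = yi - w @ xi          # prediction error for this sample
    w += mu * ci * e * xi    # cost-weighted LMS update
print(w)
```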

30 citations


Journal ArticleDOI
TL;DR: This manuscript presents a prototype and design implementation of an advanced home automation system that uses Wi-Fi technology as the network infrastructure connecting its parts, improving on the flexibility and scalability of commercially available home automation systems.
Abstract: Article history: Received: 15 December, 2016 Accepted: 20 January, 2017 Online: 28 January, 2017. This manuscript presents a prototype and design implementation of an advanced home automation system that uses Wi-Fi technology as the network infrastructure connecting its parts. The proposed system consists of two main components. The first is the server, the system core, which manages and controls the user's home; users and the system administrator can manage and control the system locally (over the Local Area Network) or remotely (over the Internet). The second is the hardware interface module, which provides the appropriate interface to the sensors and actuators of the home automation system. Unlike most home automation systems available on the market, the proposed system is scalable: one server can manage many hardware interface modules as long as they are within network coverage. The system supports a wide range of home automation devices, such as appliances, power management components, and security components, and is better in terms of flexibility and scalability than commercially available home automation systems.
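As a toy illustration of the server/module split described above, a hardware interface module could expose an HTTP endpoint that the server calls over Wi-Fi; the paths, port, and device names below are invented, not the paper's protocol:

```python
# Toy sketch of a hardware interface module reachable over the LAN.
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICES = {"lamp": False, "heater": False}  # hypothetical actuators

class ModuleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /toggle/lamp flips the lamp relay
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "toggle" and parts[1] in DEVICES:
            DEVICES[parts[1]] = not DEVICES[parts[1]]
            body = f"{parts[1]} -> {DEVICES[parts[1]]}".encode()
            self.send_response(200)
        else:
            body = b"unknown device"
            self.send_response(404)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ModuleHandler).serve_forever()
```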

28 citations


Journal ArticleDOI
TL;DR: This work discusses aspects related to the use of augmented reality and interaction design as tools for teaching anatomy and knowledge discovery, proposing a case study based on a mobile application that can display targeted anatomical parts in high resolution and in detail.
Abstract: Article history: Received: 05 April, 2017 Accepted: 02 May, 2017 Online: 24 May, 2017. The evolution of technology has changed the face of education, especially when combined with appropriate pedagogical bases. This combination has created innovation opportunities that add quality to teaching through new perspectives on traditional classroom methods. In the health field in particular, augmented reality and interaction design techniques can assist the teacher in the exposition of theoretical concepts and/or concepts that require training in specific medical procedures. Besides, visualization of and interaction with health data from different sources and in different formats help to identify hidden patterns or anomalies, increase flexibility in the search for certain values, allow the comparison of different units to obtain relative differences in quantities, and provide human interaction in real time. At this point, it is noted that interactive visualization techniques such as augmented and virtual reality can support the process of knowledge discovery in medical and biomedical databases. This work discusses aspects related to the use of augmented reality and interaction design as tools for teaching anatomy and knowledge discovery, proposing a case study based on a mobile application that can display targeted anatomical parts in high resolution and in detail.

21 citations


Journal ArticleDOI
TL;DR: The instantiation of the Model of Adaptation of Learning Objects (MALO) developed in previous works is presented, using the competencies to be developed in a given educational context.
Abstract: Article history: Received: 25 March, 2017 Accepted: 04 May, 2017 Online: 17 May, 2017. This article presents the instantiation of the Model of Adaptation of Learning Objects (MALO), developed in previous works, using the competencies to be developed in a given educational context. MALO was developed for virtual environments based on an extension of the LOM standard. The model specifies, modularly and independently, two categories of rules, adaptation and conversion, giving it the versatility and flexibility to perform different types of adaptation to learning objects by incorporating or removing rules in each category. In this work, we instantiate these MALO rules using the competencies considered in a given educational context.
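A speculative sketch of the two independent rule categories might represent each as a list of functions applied to a learning object; the field names below follow LOM only loosely and are assumptions, not the paper's schema:

```python
# Speculative illustration of MALO's two rule categories: adaptation
# rules adjust learning-object metadata for a target competency level,
# conversion rules change the delivery format. All fields are invented.
learning_object = {"title": "Intro to SQL", "format": "pdf", "difficulty": "high"}

adaptation_rules = [
    lambda lo, ctx: {**lo, "difficulty": ctx["level"]},        # match learner level
]
conversion_rules = [
    lambda lo, ctx: {**lo, "format": ctx["device_format"]},    # e.g. pdf -> html
]

context = {"level": "beginner", "device_format": "html"}
for rule in adaptation_rules + conversion_rules:
    learning_object = rule(learning_object, context)
print(learning_object)
```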

19 citations



Journal ArticleDOI
TL;DR: This work presents an alternative approach based on an embedded system to acquire the position-related variables, combined with machine learning techniques, namely dimensionality reduction (DR) and classification.
Abstract: Article history: Received: 24 March, 2017 Accepted: 19 May, 2017 Online: 16 June, 2017. The analysis of human sitting position is a research area that makes it possible to prevent physical health problems in the back. Although many works have proposed systems that detect the sitting position, some open issues remain to be dealt with, such as cost, computational load, accuracy, and portability, among others. In this work, we present an alternative approach based on an embedded system to acquire the position-related variables, combined with machine learning techniques, namely dimensionality reduction (DR) and classification. Since the information acquired by the sensors is high-dimensional and might not fit into the embedded system's memory, the system includes a DR stage based on principal component analysis (PCA). Subsequently, pose detection is carried out by a k-nearest neighbors (KNN) classifier operating on the matrix stored in the system and new data acquired by the pressure and distance sensors. Compared with using the whole dataset, the computational cost is decreased by 33% and the data reading time is reduced by 10 ms. The sitting-pose detection task then takes 26 ms and reaches 75% accuracy in a 4-trial experiment.
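A minimal sketch of this two-stage pipeline, with PCA compressing the sensor readings before a k-NN classifier labels a new sample, might look like the following (the sensor dimensionality, pose labels, and component count are invented):

```python
# Sketch of the PCA + k-NN pipeline: compress sensor readings so the
# model fits in embedded memory, then label a new reading.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(120, 16))      # 16 pressure/distance sensors
y_train = rng.integers(0, 4, size=120)    # 4 sitting poses (toy labels)

pca = PCA(n_components=5)                 # reduced matrix stored on-device
X_small = pca.fit_transform(X_train)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_small, y_train)

new_reading = rng.normal(size=(1, 16))    # fresh sensor sample
print(knn.predict(pca.transform(new_reading)))
```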

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a method of evaluating the mental health condition of a person based on the sound of their voice, which can be used continually through a smartphone call.
Abstract: Article history: Received: 06 April, 2017 Accepted: 28 April, 2017 Online: 17 May, 2017. We have been developing a method of evaluating the mental health condition of a person based on the sound of their voice. We have applied this technology to create a smartphone application that shows vitality and mental activity as mental health condition indices. Using voice to measure one's mental health condition is a non-invasive method, and the application can be used continually through smartphone calls; unlike a periodic yearly checkup, it can be used for monitoring on a daily basis. The purpose of this study is to compare the vitality index to the widely used Beck Depression Inventory (BDI) and to evaluate its validity. The experiment was conducted at the Center of Innovation Program of the University of Tokyo with 50 employees of one corporation as participants, between early December 2015 and early February 2016. Each participant was given a smartphone with our application, which recorded his/her voice automatically during calls; in addition, the participants had to read and record a fixed phrase daily. The BDI test was conducted at the beginning of the experimental period. The vitality index was calculated from the voice data collected during the first two weeks of the experiment and was treated as the vitality index at the time the BDI test was conducted. When the vitality and mental activity indicators were compared to the BDI score, we found a negative correlation between the BDI score and these indices. Additionally, the indices proved useful for identifying participants with a high BDI score, at high risk of disease, and the mental activity index showed higher performance than the vitality index.
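The validity check described above reduces to a correlation between the voice-derived indices and BDI scores; a minimal version with invented numbers might look like this:

```python
# Toy correlation check between BDI scores and a voice-derived index.
import numpy as np

bdi = np.array([3, 7, 12, 18, 25, 30])               # hypothetical BDI scores
vitality = np.array([0.9, 0.8, 0.6, 0.5, 0.3, 0.2])  # hypothetical index values

r = np.corrcoef(bdi, vitality)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative, as the study reports
```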

16 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed EEG mind-controlled arm is a promising alternative to current solutions that require invasive and expensive surgical procedures.
Abstract: Article history: Received: 05 April, 2017 Accepted: 11 June, 2017 Online: 26 June, 2017. Recently, the field of prosthetics has seen many accomplishments, especially with the integration of technological advancements. In this paper, different arm types (robotic, surgical, bionic, prosthetic and static) are analyzed in terms of resistance, usage, flexibility, cost and potential. Most of these techniques have problems: they are extremely expensive, hard to install and maintain, and may require surgery. Therefore, our work introduces the initial design of an EEG mind-controlled smart prosthetic arm. The arm is controlled by brain commands obtained from an electroencephalography (EEG) headset and is equipped with a network of smart sensors and actuators that give the patient intelligent feedback about the surrounding environment and the object in contact. This network provides the arm with normal hand functionality, smart reflexes and smooth movements. Various types of sensors are used, including temperature, pressure and ultrasonic proximity sensors, accelerometers, potentiometers, strain gauges and gyroscopes. The arm is completely 3D-printed, built from various lightweight, high-strength materials that can handle high impacts as well as fragile objects. Our project requires nine servomotors installed at different places in the arm, so the static and dynamic modes of the servomotors are analyzed. The total cost of the project is estimated to be relatively low compared to other previously built arms. Many scenarios corresponding to the actions the prosthetic arm can perform are analyzed, and an algorithm is created to match these scenarios. Experimental results show that the proposed EEG mind-controlled arm is a promising alternative to current solutions that require invasive and expensive surgical procedures.

16 citations


Journal ArticleDOI
TL;DR: A prototype Head-Up Display interface is presented which acts as an interactive infotainment system for younger rear-seat passengers, aiming to minimize driver distraction; it employs an Augmented Reality medium that utilizes the external scenery as a background for two platform games explicitly designed for this system.
Abstract: Article history: Received: 06 April, 2017 Accepted: 15 May, 2017 Online: 04 June, 2017. The paper presents a prototype Head-Up Display interface which acts as an interactive infotainment system for younger rear-seat passengers, aiming to minimize driver distraction. The interface employs an Augmented Reality medium that utilizes the external scenery as a background for two platform games explicitly designed for this system. Additionally, the system provides AR-embedded information on major en-route landmarks, navigational data, and local news, among other infotainment options. The proposed design is applied to the peripheral windscreens with the use of a novel Head-Up Display system. The system evaluation by twenty users offered promising results, which are discussed in the paper.

Journal ArticleDOI
TL;DR: This work presents an innovative approach to address some of the challenges that currently hinder data center management, and explains how monitoring and management systems should be envisioned and implemented.
Abstract: Article history: Received: 30 May, 2017 Accepted: 09 August, 2017 Online: 21 August, 2017. Recent standards, legislation, and best practices point to data center infrastructure management systems to control and monitor data center performance. This work presents an innovative approach to address some of the challenges that currently hinder data center management, and explains how monitoring and management systems should be envisioned and implemented. Key parameters associated with data center infrastructure and information technology equipment can be monitored in real time across an entire facility using low-cost, low-power wireless sensors. Given data centers' mission-critical nature, the system must be reliable and deployable through a non-invasive process. The need for the monitoring system is also presented from a feedback control systems perspective, which allows higher levels of automation. The data center monitoring and management system enables data gathering, analysis, and decision-making to improve performance and enhance asset utilization.

Journal ArticleDOI
TL;DR: In this article, the authors modified the key of the Vigenere cipher so that, when the key is shorter than the plaintext entered, the remaining key characters are generated by a process that makes each new key character different from the previous one.
Abstract: The Vigenere cipher is one of the classic cryptographic algorithms and belongs to the symmetric key family, in which the encryption and decryption processes use the same key. The Vigenere cipher has the disadvantage that if the key length is not equal to the length of the plaintext, the key is repeated until it matches the plaintext length, which allows cryptanalysts to carry out cryptanalysis. A further weakness of symmetric key algorithms is the security of key distribution: if the key becomes known to others, the cryptography itself becomes useless. Based on these two weaknesses, in this study we modify the key of the Vigenere cipher so that, when the key is shorter than the plaintext entered, the remaining key characters are generated by a process that makes each new key character different from the previous one. This study also applies the three-pass protocol, a technique in which the message sender does not need to send the key, because each party uses its own key for the message encryption and decryption process, making the security of a message more difficult to break.
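One plausible reading of the key-modification idea is sketched below: when the key is shorter than the plaintext, each new key character is derived from the previous one rather than repeating the key. The derivation rule (a fixed shift of the previous character) is an assumption for illustration, not the paper's exact formula, and the three-pass exchange is omitted:

```python
# Hedged sketch: Vigenere with a generated (non-repeating) key extension.
def extend_key(key: str, length: int) -> str:
    chars = list(key)
    while len(chars) < length:
        prev = chars[-1]
        # derive a new, different character from the previous one (assumed rule)
        chars.append(chr((ord(prev) - 65 + 7) % 26 + 65))
    return "".join(chars[:length])

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    # classic A-Z Vigenere, using the extended key
    key = extend_key(key, len(text))
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(t) - 65 + sign * (ord(k) - 65)) % 26 + 65)
        for t, k in zip(text, key)
    )

ct = vigenere("ATTACKATDAWN", "KEY")
print(ct, vigenere(ct, "KEY", decrypt=True))  # round-trips to the plaintext
```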

Journal ArticleDOI
TL;DR: In this article, a principal component analysis (PCA) based algorithm for the analysis and identification of flavonoid classes from the Fourier Transform Infrared spectroscopy (FTIR) spectrum was introduced.
Abstract: Flavonoids are among the bioactive compounds currently used in the pharmaceutical and medicinal industries due to their health benefits. Current research focuses mainly on the extraction and isolation of bioactive compounds; however, none to date has explored the identification of flavonoid classes under Fourier Transform Infrared spectroscopy (FTIR). This gap presents an opportunity for statistical analysis that can identify the distinct wavenumber ranges of flavone, flavanone and flavonol for their characterization in the FTIR spectrum. An algorithm based on principal component analysis (PCA) for the analysis and identification of flavonoid classes from the FTIR spectrum is introduced. Based on the results, five wavenumber ranges were identified as the distinct characteristics of flavonol, flavone and flavanone and hence used for their identification.
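As an illustrative sketch, the PCA step could surface candidate wavenumber ranges by inspecting component loadings; the spectra below are synthetic stand-ins for real FTIR measurements:

```python
# Sketch: use PCA loadings to spot wavenumbers that drive variation
# between FTIR spectra (synthetic data in place of the instrument's).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
wavenumbers = np.linspace(400, 4000, 500)   # cm^-1 axis
spectra = rng.normal(size=(30, 500))        # 30 toy FTIR spectra

pca = PCA(n_components=2)
pca.fit(spectra)

# Large-magnitude loadings mark wavenumbers that separate the samples.
loadings = pca.components_[0]
top = np.argsort(np.abs(loadings))[::-1][:5]
print("candidate distinctive wavenumbers:", np.sort(wavenumbers[top]))
```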

Journal ArticleDOI
TL;DR: This work provides insight into the basic mechanisms of deception-based cyber defense and proposes a solution that makes deception systems accessible to a broad range of users through a dynamic deployment strategy based on machine learning that adapts to the network context.
Abstract: Article history: Received: 22 May, 2017 Accepted: 07 July, 2017 Online: 01 August, 2017. Network security is often built on perimeter defense, yet sophisticated attacks are able to penetrate the perimeter and access valuable resources in the network. A more complete defense strategy therefore also contains mechanisms to detect and mitigate perimeter breaches. Deceptive systems are a promising technology to detect, deceive and counter infiltrations. In this work we provide insight into the basic mechanisms of deception-based cyber defense and discuss in detail one of the most significant drawbacks of the technology: deployment. We also propose a solution that makes deception systems accessible to a broad range of users, achieved by a dynamic deployment strategy based on machine learning that adapts to the network context. Different methods, algorithms and combinations are evaluated to eventually build a fully adaptive deployment framework. The proposed framework needs a minimal amount of configuration and maintenance.

Journal ArticleDOI
TL;DR: A state-of-the-art machine learning deep neural network and the divide-and-conquer approach to model large road stretches were adopted and the resulting predictions were better than predictions obtained using partial least squares regression.
Abstract: The availability of traffic data and computational advances now make it possible to build data-driven models that capture the evolution of the state of traffic along modeled stretches of road. These models are used for short-term prediction so that transportation facilities can be operated in an efficient way that guarantees a high level of service. In this paper, we adopted a state-of-the-art machine learning deep neural network and the divide-and-conquer approach to model large road stretches. The proposed approach is expected to be a tool used in daily routines to enhance proactive decision support systems. It maintains spatiotemporal correlations between contiguous road segments and is suitable for practical applications because it divides the large prediction problem into a set of smaller overlapping problems. These smaller problems can be solved in a reasonable time using a medium-configuration PC. The proposed approach was used to model 21.1- and 30.7-mile stretches of highway along I-15 and I-66, respectively. The resulting predictions were better than predictions obtained using partial least squares regression.
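The divide-and-conquer split can be sketched as a windowing scheme over contiguous segments, where each small model shares a few segments with its neighbors to preserve spatial context (the window and overlap sizes here are invented):

```python
# Sketch: split a long road stretch into overlapping windows of segments
# so each small prediction model keeps context from its neighbors.
def overlapping_windows(n_segments: int, window: int, overlap: int):
    step = window - overlap
    starts = range(0, max(n_segments - overlap, 1), step)
    return [(s, min(s + window, n_segments)) for s in starts]

# 40 contiguous segments; each model sees 10, sharing 3 with its neighbor.
for lo, hi in overlapping_windows(40, window=10, overlap=3):
    print(f"model covers segments [{lo}, {hi})")
```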

Journal ArticleDOI
TL;DR: An anomaly-based network intrusion detection system that can identify new network threats is proposed, implemented on the real-time big data stream processing framework Apache Storm.
Abstract: Article history: Received: 05 April, 2017 Accepted: 27 May, 2017 Online: 19 June, 2017. Network security implements various strategies for the identification and prevention of security breaches. Network intrusion detection is a critical component of network management for security, quality of service and other purposes. These systems allow early detection of network intrusions and malicious activities, so that the network security infrastructure can react to mitigate these threats. Various systems have been proposed to enhance network security; in this work, we propose an anomaly-based network intrusion detection system, which can identify new network threats. We also propose the use of a real-time big data stream processing framework, Apache Storm, for the implementation of the intrusion detection system. Apache Storm can help manage network traffic, which is generated at enormous and constantly increasing speed and volume. We use a Support Vector Machine in this work, and the Knowledge Discovery and Data Mining 1999 (KDD'99) dataset to test and evaluate our proposed solution.
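A toy version of the classification stage might train an SVM on a few KDD'99-style numeric features; a real deployment would stream records through Apache Storm topologies, which are omitted here, and the records below are invented:

```python
# Toy sketch of the SVM detection stage on KDD'99-style features
# (duration, src_bytes, dst_bytes); records and labels are invented.
from sklearn.svm import SVC

X_train = [
    [0, 181, 5450],   # normal-looking connection
    [0, 239, 486],    # normal
    [0, 0, 0],        # probe-like, no payload
    [2, 0, 0],        # probe-like
]
y_train = ["normal", "normal", "attack", "attack"]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print(clf.predict([[0, 0, 0]]))  # classifies the zero-payload record
```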

Journal ArticleDOI
Authors: Nathaphon Boonnam, Jumras Pitakphongmetha, Siriwan Kajornkasirat (Department of Applied Mathematics and Informatics, Faculty of Science and Industrial Technology, Prince of Songkla University, Surat Thani Campus, 84000, Thailand); Teerayut Horanont, Deeprom Somkiadcharoen, Jiranuwat Prapakornpilai (School of Information, Communication and Computer Technologies, Sirindhorn International Institute of Technology, Thammasat University, 12120, Thailand)

Journal ArticleDOI
TL;DR: It was demonstrated that CRISP-DM stands as a true methodology in comparison with SEMMA, because it describes each phase and task in detail through its official documentation and concrete examples of its application.
Abstract: Article history: Received: 04 April, 2017 Accepted: 12 May, 2017 Online: 04 June, 2017. Among the most popular methodologies for the development of data mining projects are CRISP-DM and SEMMA. This research paper explains why it was decided to compare them through a specific case study, describing in detail each phase, task and activity proposed by each methodology and applying them in the construction of a MODIS repository for studies of land use and cover change. In addition to the obvious differences between the methodologies, other differences were found in the activities proposed by each model that are crucial in non-typical data mining studies. At the same time, this research reliably determines the advantages and disadvantages of each model for this type of case study. When the MODIS product repository construction process was completed, it was found that the additional time used by CRISP-DM in the first phase was recovered in the following phases, since the planning, definition of mining goals, and generation of contingency plans allowed the proposed phases to be developed without inconvenience. It was also demonstrated that CRISP-DM stands as a true methodology in comparison with SEMMA, because it describes each phase and task in detail through its official documentation and concrete examples of its application.

Journal ArticleDOI
TL;DR: A Caesar cipher modification is combined with the transposition cipher, giving three rounds of encryption: the first Caesar modification, then transposition of the generated ciphertext, and finally a second Caesar modification applied to the transposition result.
Abstract: Article history: Received: 17 March, 2017 Accepted: 20 April, 2017 Online: 13 June, 2017. The Caesar cipher modification is combined with the transposition cipher, so that encryption is performed three times in this experiment: the first Caesar modification, after which the generated ciphertext is encrypted with transposition, and finally the transposition result is encrypted again with a second Caesar modification; decryption proceeds similarly, but with the process reversed. In the modification of the Caesar cipher, letters are shifted not within the alphabet but within the ASCII table. The plaintext receives additional characters before encryption, and the new plaintext with the added characters is then divided in two: a part to be encrypted and a part that is left constant (not encrypted). The third modification is that the key is used dynamically, following the ASCII value of the plaintext.
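A hedged sketch of the three-stage pipeline follows: an ASCII-based, position-dependent Caesar shift standing in for the paper's dynamic key, a columnar transposition, and a second Caesar pass. The padding and key rules are simplifications, not the paper's exact scheme:

```python
# Sketch of Caesar (ASCII) -> transposition -> Caesar, with round trip.
def caesar_ascii(text: str, shift: int, decrypt: bool = False) -> str:
    # position-dependent ASCII shift (a stand-in for the dynamic key)
    sign = -1 if decrypt else 1
    return "".join(chr((ord(c) + sign * (shift + i)) % 256)
                   for i, c in enumerate(text))

def transpose(text: str, cols: int) -> str:
    # write row-wise, read column-wise; pad the last row (simplified)
    text = text.ljust(-(-len(text) // cols) * cols, "_")
    return "".join(text[c::cols] for c in range(cols))

def untranspose(text: str, cols: int) -> str:
    rows = len(text) // cols
    return "".join(text[c * rows + r] for r in range(rows) for c in range(cols))

msg = "attack at dawn"
ct = caesar_ascii(transpose(caesar_ascii(msg, 5), 4), 11)    # three stages
padded = untranspose(caesar_ascii(ct, 11, decrypt=True), 4)  # reverse order
print(caesar_ascii(padded.rstrip("_"), 5, decrypt=True))     # -> attack at dawn
```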



Journal ArticleDOI
Authors: Lucila Romero (Universidad Nacional del Litoral, Facultad de Ingenieria y Ciencias Hidricas, Argentina)


Journal ArticleDOI
TL;DR: The present study is primarily concerned with confirming this theoretical framework so as to ultimately secure the virtual machine image in cloud computing, and will be achieved by carrying out interviews with experts in the field of cloud security.
Abstract: The concept of cloud computing has arisen thanks to academic work in the fields of utility computing, distributed computing, virtualisation, and web services. By using cloud computing, which can be accessed from anywhere, newly launched businesses can minimise their start-up costs. Among the most important notions when it comes to the construction of cloud computing is virtualisation; while this concept brings its own security risks, these risks are not necessarily related to the cloud. The main disadvantage of using cloud computing is linked to safety and security, because anyone who chooses to employ cloud computing will use someone else's hard disk and CPU in order to sort and store data. In cloud environments, a great deal of importance is placed on guaranteeing that the virtual machine image is safe and secure. Indeed, a previous study has put forth a framework with which to protect the virtual machine image in cloud computing. As such, the present study is primarily concerned with confirming this theoretical framework so as to ultimately secure the virtual machine image in cloud computing. This will be achieved by carrying out interviews with experts in the field of cloud security.

Journal ArticleDOI
TL;DR: This study briefly presents a solution that combines RFID with smartcard-based biometrics to enhance security, especially in access control scenarios, and aims to give a clear vision of the available solutions and techniques used to prevent specific threats and attacks and secure the RFID system.
Abstract: Article history: Received: 11 November, 2017 Accepted: 01 December, 2017 Online: 14 December, 2017. Radio Frequency Identification (RFID) is currently considered one of the most widely used technologies for the automatic identification of objects or people. Based on a combination of tags and readers, RFID technology has been widely applied in various areas including supply chain, production and traffic control systems. However, despite its numerous advantages, the technology raises many challenges and concerns that are attracting more and more researchers, especially security and privacy issues. In this paper, we review some of the recent research works that use RFID solutions and deal with security and privacy issues; we define specific parameters and requirements allowing us to classify, for each work, which part of the RFID system is being secured, the solutions and techniques used, and the conformity to RFID standards. Finally, we briefly present a solution that combines RFID with smartcard-based biometrics to enhance security, especially in access control scenarios. The result of our study thus aims to give a clear vision of the available solutions and techniques used to prevent specific threats and attacks and secure the RFID system.

Journal ArticleDOI
TL;DR: A system is developed to improve the multiclass classification rate in neuro-prosthetics, as advancements in prosthetics control allow amputees to perform ever more tasks.
Abstract: Article history: Received: 30 May, 2017 Accepted: 28 June, 2017 Online: 15 July, 2017. Research in neuro-prosthetics is gaining significance and popularity as advancements in prosthetics control allow amputees to perform ever more tasks. Indeed, improving classification accuracy is a challenge in prosthetics control. In this research, a system is developed to improve the multiclass classification rate. Two classifiers, namely an Artificial Neural Network (ANN) and a Support Vector Machine (SVM), are trained to recognize five different myoelectric motions of the hand fingers. The electromyography (EMG) signals are acquired using surface electrodes placed on the forearm at specific nodes. Signal conditioning is performed using two-stage filtering and amplification, followed by digitization. The final version of the EMG signals is correlated in the joint time-frequency domain to obtain the best feature vectors, via the Discrete Wavelet Transform (DWT). The feature vectors are used to train the ANN and SVM. The classification results show an exceptional performance of the ANN, with a classification accuracy of 98.7%, over the SVM at 96.7%.
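The DWT feature step could be sketched as decomposing an EMG window into sub-bands and summarizing each with simple statistics, assuming the PyWavelets package; the mother wavelet, decomposition level, and statistics below are illustrative choices, not necessarily the paper's:

```python
# Sketch: DWT-based feature extraction from one (synthetic) EMG window.
import numpy as np
import pywt

rng = np.random.default_rng(3)
emg_window = rng.normal(size=256)                  # one toy EMG frame

coeffs = pywt.wavedec(emg_window, "db4", level=3)  # approximation + details
features = [f(c) for c in coeffs for f in (np.mean, np.std)]
print(len(features), "features per window")        # feeds the ANN/SVM
```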

Journal ArticleDOI
TL;DR: An empirical study examines the usage of known vulnerable statements in C/C++ software systems used for IoT; it shows that the most prevalent unsafe command in most systems is memcpy, followed by strlen, results that can be used to help train software developers on secure coding practices.
Abstract: Article history: Received: 02 June, 2017 Accepted: 21 July, 2017 Online: 15 August, 2017 An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
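A simplified version of such a static scan counts occurrences of known-unsafe calls with a word-boundary regex; a production scanner would also skip comments and string literals, and the source directory below is hypothetical:

```python
# Simplified static scan: count known-unsafe C calls in a source tree.
import re
from pathlib import Path
from collections import Counter

UNSAFE = ["strcpy", "strcat", "sprintf", "gets", "memcpy", "strlen", "strcmp"]
pattern = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

counts = Counter()
for path in Path("src").rglob("*.c*"):  # hypothetical source directory
    counts.update(pattern.findall(path.read_text(errors="ignore")))

for cmd, n in counts.most_common():
    print(f"{cmd}: {n}")
```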

Journal ArticleDOI
TL;DR: The global control system proposed in this paper contains an MRAC estimator together with PI-based control, ensuring good dynamic performance at a lower structural complexity that can properly be implemented in real time in a distributed, DSP-based control system on a local network using the CANopen protocol.
Abstract: Article history: Received: 04 April, 2017 Accepted: 05 May, 2017 Online: 17 May, 2017. In this article we tackle the control of multi-motor electric drives with high dynamics, rapid changes in torque and speed, and rigid or flexible coupling of motors, where the control strategy is FOC (Field Oriented Control) for each drive and the control is distributed over a local network using the CANopen protocol. In the surface mining industry, from which the electric drive application for this article is selected, the general trend is toward using asynchronous motors with short-circuited rotors, due to the advantages of this motor both in terms of design and operation. To achieve variable speed, static frequency converters with sensorless control must be used, where speed is estimated using a Model Reference Adaptive Control (MRAC) estimator. The global control system proposed in this paper contains this type of MRAC estimator together with PI-based control, which ensures good dynamic performance at a lower structural complexity, so that it can properly be implemented in real time in a distributed control system with DSPs on a local network using the CANopen protocol, with advantages in terms of software technology as well as control cost and flexibility of use. Following these directions, a functional application was implemented and tested in practice.
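The PI loops mentioned above are standard; as a language-agnostic illustration (Python rather than DSP firmware), one discrete PI control step with output clamping might look like the following, where the gains, sample time, and torque limit are placeholders:

```python
# Sketch of one discrete PI control step with output clamping; kp, ki,
# dt, and the torque limit are placeholder values, not the paper's tuning.
class PIController:
    def __init__(self, kp: float, ki: float, dt: float, limit: float):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def step(self, reference: float, measured: float) -> float:
        error = reference - measured
        self.integral += error * self.dt             # accumulate integral term
        u = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, u))  # clamp torque command

speed_pi = PIController(kp=0.8, ki=12.0, dt=0.001, limit=50.0)
print(speed_pi.step(reference=100.0, measured=92.5))  # one control tick
```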

Journal ArticleDOI
TL;DR: In this article, statistical parameters are implemented to characterize the content of an image and its texture; the major issue addressed in the work concentrates on brightness distribution via statistical measures under different types of lighting.
Abstract: Article history: Received: 11 April, 2017 Accepted: 24 June, 2017 Online: 17 July, 2017. Studying the content of images is an important topic through which reasonable and accurate analyses of images are generated. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life; these media, crowded with images, highlight image analysis as a research area. In this paper, the implemented system passes through several steps to compute the statistical measures of standard deviation and mean values of both color and grey images, and the last step of the proposed method compares the results obtained in the different cases of the test phase. The statistical parameters are implemented to characterize the content of an image and its texture: standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both the standard deviation and mean values through the implementation of the system. The major issue addressed in the work concentrates on brightness distribution via statistical measures under different types of lighting.
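A minimal version of the measures used above computes the mean and standard deviation of a grey image and of each color channel; the image below is synthetic, standing in for the paper's test set:

```python
# Sketch: mean/std brightness statistics for grey and color images.
import numpy as np

rng = np.random.default_rng(4)
color = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
grey = color.mean(axis=2)  # simple luminance approximation

print("grey mean/std:", grey.mean(), grey.std())
for ch, name in enumerate("RGB"):
    band = color[..., ch].astype(float)
    print(f"{name} mean/std:", band.mean(), band.std())
```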