
Showing papers in "Computer Engineering and Applications in 2020"


Journal ArticleDOI
TL;DR: The results showed that fruit classification using Speeded Up Robust Features (SURF) extraction with a Support Vector Machine (SVM) classifier is highly accurate.
Abstract: A wide variety of fruits can be found throughout Indonesia. Many fruits are rich sources of vitamins that benefit the body, and they also serve as a source of income for farmers. It is no wonder that many researchers propose new techniques to increase productivity or to experiment with intelligent systems. Intelligent systems are machines specially designed for particular domains, with capabilities tailored by their creators. This article applies a texture classification technique, Speeded Up Robust Features (SURF), together with the Support Vector Machine (SVM) method. In this approach, image data is represented by capturing features as keypoints. SURF uses the determinant of the Hessian matrix to locate interest points, at which description and classification are performed. The method delivers superior performance compared with existing methods in terms of processing time, accuracy, and robustness. The results show that fruit classification using SURF feature extraction and SVM classification is accurate: among three SVM kernel functions, the Gaussian kernel achieved 72% accuracy, the polynomial kernel 69.75%, and the linear kernel 70.25%.
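The interest-point step the abstract describes can be sketched in miniature: score each pixel by the determinant of its Hessian, approximated with finite differences, and keep the strongest response. This is an illustrative toy, not the paper's SURF implementation (real SURF uses box filters over integral images at multiple scales); the image and function names are invented.

```python
# Toy determinant-of-Hessian response, the quantity SURF uses to
# locate interest points. Finite differences stand in for SURF's
# box-filter approximation; single scale only.

def hessian_determinant(img, x, y):
    """Det of the 2x2 Hessian at (x, y) via central differences."""
    lxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    lyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    lxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return lxx * lyy - lxy * lxy

def strongest_keypoint(img):
    """Return the interior pixel with the largest Hessian response."""
    h, w = len(img), len(img[0])
    best = max(((hessian_determinant(img, x, y), (x, y))
                for y in range(1, h - 1) for x in range(1, w - 1)))
    return best[1]

# A tiny image with a bright blob centred at (2, 2).
img = [[0, 0, 0, 0, 0],
       [0, 1, 2, 1, 0],
       [0, 2, 9, 2, 0],
       [0, 1, 2, 1, 0],
       [0, 0, 0, 0, 0]]
print(strongest_keypoint(img))  # the blob centre stands out
```

In the full pipeline, descriptors computed at such points feed the SVM classifier compared across the three kernels above.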

7 citations


Journal ArticleDOI
TL;DR: A garbage bin that can be monitored in real time, with communication between the bin and a mobile phone, helps both garbage collectors and users monitor the bin's capacity.
Abstract: This paper discusses a garbage bin that can be monitored in real time. Information on garbage capacity is available through an application integrated into a mobile phone. The communication between the bin and the phone is intended to help both the garbage collector and the user monitor how full the bin is. When the bin is overloaded, the collector can move the garbage to a larger bin (landfill). The garbage bin has been tested and runs well: it opens and closes its cover as soon as it detects, or no longer detects, an object, and it sends capacity information to the mobile phone immediately, with a delay of only 0.45-0.47 s.
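The monitoring logic might look like the following sketch, where an ultrasonic sensor reports the distance from the lid to the garbage surface and the fill level is derived from the bin depth. The bin depth, threshold, and function names are assumptions for illustration, not taken from the paper.

```python
# Hypothetical bin-monitoring logic: distance reading -> fill ratio
# -> status string pushed to the mobile application.

BIN_DEPTH_CM = 50.0        # assumed distance from sensor to bin floor
ALERT_THRESHOLD = 0.90     # assumed: notify collector above 90% full

def fill_level(distance_cm):
    """Convert a sensor distance reading to a 0.0-1.0 fill ratio."""
    level = (BIN_DEPTH_CM - distance_cm) / BIN_DEPTH_CM
    return max(0.0, min(1.0, level))

def status_message(distance_cm):
    """Status string of the kind the mobile application would show."""
    level = fill_level(distance_cm)
    if level >= ALERT_THRESHOLD:
        return f"OVERLOADED ({level:.0%}): move garbage to landfill"
    return f"capacity {level:.0%}"

print(status_message(40.0))  # mostly empty bin
print(status_message(3.0))   # nearly full bin
```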

6 citations


Journal ArticleDOI
TL;DR: A method to navigate a robot based on human finger cues, covering forward, backward, turn-right, turn-left, and stop motions, is presented; to some extent the robot can follow human cues to navigate its assigned location.
Abstract: Current technology enables automation using robots to help or substitute for humans in industrial and domestic applications. This expansion of robots into human life creates a new requirement: a method of communication between human and robot. One of the oldest languages is finger gesture, which is easily applied by connecting image detection to the robot's actuators so that it responds to human orders. This paper presents a method to navigate a robot based on human finger cues, "Forward," "Backward," "Turn right," "Turn left," and "Stop," which generate the corresponding motions. Finger detection is facilitated by a camera module (NFR2401L) with a 640 x 480 image plane at 30 fps. When detection falls at coordinates x < 43 and y < 100, the robot moves forward; at x < 29 and y < 100, it turns left; and at x < 19 and y < 100, it turns right. Experiments were conducted to show the effectiveness of the proposed method, and to some extent the robot can follow human cues to navigate its assigned location.
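The coordinate thresholds quoted above overlap (a point with x < 19 also satisfies x < 43), so the narrowest region must be tested first. The sketch below encodes that ordering; the function name and the "stop" fallback for unmatched coordinates are assumptions, since the abstract gives thresholds only for the three motions shown.

```python
# Mapping a detected fingertip position to a motion command, using
# the thresholds reported in the abstract. Overlapping ranges are
# resolved by checking the narrowest one first.

def finger_command(x, y):
    """Return the motion command for a fingertip detected at (x, y)."""
    if y < 100:
        if x < 19:
            return "turn right"
        if x < 29:
            return "turn left"
        if x < 43:
            return "forward"
    return "stop"   # assumed fallback for coordinates outside all ranges

print(finger_command(10, 50))   # turn right
print(finger_command(25, 50))   # turn left
print(finger_command(40, 50))   # forward
print(finger_command(40, 150))  # stop
```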

5 citations


Journal ArticleDOI
TL;DR: The paper discusses the concept of an automatic transport system using a weight sensor that can transport people and goods without a driver, which can support a new "normal" and reduce manufacturing costs in industry.
Abstract: The current pandemic situation insists that people find ways to maintain physical distance, limiting the number of people in a closed room. The human need for commuting has led to the idea of an automatic transport system that can move people and goods without the assistance of a driver. This idea can support a new "normal" and reduce manufacturing costs in industry. This paper discusses the concept of an automatic transport system using a weight sensor. An automatic vehicle is designed to transport loads of different packages, each allocated automatically based on the weight of the package. The system is designed to be as simple as possible to widen its scope for implementation.
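The allocation rule can be sketched as a simple lookup from a weight-sensor reading to a destination. The weight bands and bay names below are invented for illustration; the paper does not specify them.

```python
# Hypothetical weight-based allocation: a package is routed to a
# destination bay according to the weight sensor's reading.

WEIGHT_BANDS = [          # (upper bound in kg, destination) - assumed
    (1.0, "bay-light"),
    (5.0, "bay-medium"),
    (20.0, "bay-heavy"),
]

def allocate(weight_kg):
    """Pick the destination bay for a package of the given weight."""
    for upper, bay in WEIGHT_BANDS:
        if weight_kg <= upper:
            return bay
    return "reject"       # above assumed vehicle capacity

print(allocate(0.4))   # bay-light
print(allocate(3.2))   # bay-medium
print(allocate(25.0))  # reject
```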

4 citations


Journal ArticleDOI
TL;DR: Gas source localization (GSL) is demonstrated using a mini quadrotor as a flying sniffer robot; the algorithm is bioinspired, based on insect search behaviour, and is constrained to a 2D open-space area.
Abstract: In this paper, we demonstrate gas source localization (GSL) using a mini quadrotor as a mini flying sniffer robot. The algorithm employed is bioinspired, based on insect search behaviour, and is constrained to operate only in a 2D open-space area. We describe the system development and the algorithm flowchart to highlight how this study achieves its target goal. The insect-behaviour-based search for the source location shows an interesting result: a satisfactory outcome in finding the source position is achieved with the bioinspired algorithm. Experimental results are provided to evaluate the performance of the searching algorithm.
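To give the flavour of gradient-free source seeking in 2D, the toy below repeatedly samples neighbouring grid cells and moves to the highest concentration until no neighbour improves. The concentration model and the greedy step rule are assumptions for illustration; the paper's actual insect-inspired algorithm is not specified here.

```python
# Toy 2D source seeking: greedy hill-climbing on a concentration
# field that decays with distance from the source. Illustrative
# only; not the paper's bioinspired algorithm.

def concentration(pos, source):
    """Assumed model: concentration falls off with squared distance."""
    return -((pos[0] - source[0]) ** 2 + (pos[1] - source[1]) ** 2)

def seek_source(start, source, max_steps=100):
    """Step to the best neighbouring cell until a local maximum."""
    pos = start
    for _ in range(max_steps):
        neighbours = [(pos[0] + dx, pos[1] + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        best = max(neighbours, key=lambda p: concentration(p, source))
        if best == pos:          # no neighbour improves: stop
            break
        pos = best
    return pos

print(seek_source(start=(0, 0), source=(7, 4)))
```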

2 citations


Journal ArticleDOI
TL;DR: A prediction model using stochastic differential equations (SDEs) is developed; analysis of data collected from varied respondents within universities then leads to the generation of a student performance trajectory.
Abstract: Student performance prediction presents institutions and learners with results that help them gauge academic abilities within their context of learning. Performance prediction has been approached in different ways over the years. Here, stochastic modelling is used, which incorporates random variables into the prediction process. The random variables are generated from different scenarios in order to produce possible outputs, which in turn indicate the likelihood of very rare scenarios that may or may not occur at a future date. The vast educational data available within the learning sector forms the input required for predicting student performance within internet-worked environments. This paper develops a prediction model using stochastic differential equations (SDEs). This then gives way to the analysis of data collected from varied respondents within universities, leading to the generation of a student performance trajectory.
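The numerical machinery behind SDE-based trajectories can be sketched with an Euler-Maruyama simulation. The SDE below (geometric Brownian motion, dX = mu X dt + sigma X dW), the parameter values, and their reading as a "performance trajectory" are assumptions for demonstration; the paper's actual model is not reproduced here.

```python
# Euler-Maruyama simulation of one sample path of the assumed SDE
# dX = mu*X dt + sigma*X dW, seeded for reproducibility.
import math
import random

def simulate_trajectory(x0, mu, sigma, t_end, n_steps, seed=42):
    """Simulate one sample path with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))    # Brownian increment
        x = x + mu * x * dt + sigma * x * dw  # Euler-Maruyama update
        path.append(x)
    return path

path = simulate_trajectory(x0=60.0, mu=0.05, sigma=0.1,
                           t_end=1.0, n_steps=250)
print(f"start={path[0]:.1f} end={path[-1]:.1f}")
```

Running many such paths from different scenarios gives the distribution of outcomes, including the rare ones the abstract mentions.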

2 citations


Journal ArticleDOI
TL;DR: Anomaly detection with the Isolation Forest algorithm as classifier is the method chosen in this study, and the results are very satisfying, reaching 99.5% accuracy.
Abstract: Author name disambiguation (AND) is a complex problem in identifying an author in a digital library (DL). The AND classification process depends heavily on the grouping process and on the data-processing techniques applied before the classifier algorithm. In general, the pre-processing technique used is pairwise similarity for author matching. On a sufficiently large dataset, the pairwise technique used in this study combines each attribute in the AND dataset and defines a binary class for each author-matching combination, where differing authors are labelled 0 and identical authors are labelled 1. The technique produces highly imbalanced data, with class 0 making up 98.9% of the data against 1.1% for class 1. This suggests that class 1 can be treated as an anomaly within the whole dataset. Therefore, anomaly detection is the method chosen in this study, using the Isolation Forest algorithm as the classifier. The results obtained are very satisfying, with accuracy reaching 99.5%.
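The following small demonstration shows why the pairwise formulation becomes so imbalanced: every pair of records forms one instance, labelled 1 only when both records refer to the same author. The toy records and author identifiers are invented for illustration.

```python
# Pairwise labelling over toy author records (name, author_id):
# the positive class shrinks rapidly as the record count grows.
from itertools import combinations

records = [("J. Smith", "a1"), ("John Smith", "a1"), ("J. Smith", "a2"),
           ("A. Jones", "a3"), ("Ann Jones", "a3"), ("A. Jones", "a4"),
           ("B. Lee", "a5"), ("B. K. Lee", "a6"), ("Bo Lee", "a7")]

pairs = list(combinations(records, 2))
labels = [1 if a[1] == b[1] else 0 for a, b in pairs]

positive = sum(labels)
print(f"{len(pairs)} pairs, {positive} matches "
      f"({positive / len(pairs):.1%} positive class)")
```

Even with nine records, only two of thirty-six pairs are positive, which mirrors the 98.9% vs 1.1% split reported above.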

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors use principal component analysis (PCA) and eigenvalue decomposition to detect spectrum holes in wideband spectrum data: the signal-to-noise magnitudes for eigenvalues 1 to 19 were high enough to indicate the presence of a signal, while eigenvalues 20 to 32 showed no signal, indicating a high likelihood of unoccupied spectrum holes in those areas.
Abstract: Ultra-wideband spectrum-hole identification using principal components and eigenvalue decomposition provides a method of detecting spectrum holes in a complex, corrupted wideband spectrum signal. Because of noise, spectrum-hole detection is usually a challenge in wideband signals: noise gives rise to false alerts and can be misconstrued as signal. Dimensionality reduction was used as the first level of denoising: principal component analysis (PCA) reduced the dimensionality of the wideband spectrum data, lowering the noise level so that the fast Fourier transform (FFT) could act on it. The FFT decomposed the signal into 64 sub-band channels, and a further PCA reduction yielded a 32-level sub-band decomposition. The eigenvalues generated show that the signal-to-noise magnitudes for eigenvalues 1 to 19 were high enough to indicate the presence of a signal, while eigenvalues 20 to 32 showed no signal, indicating that these areas have a high likelihood of unoccupied spectrum holes.
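The sub-band detection idea can be sketched as follows: decompose a sampled signal with a DFT and flag low-energy bands as candidate spectrum holes. The synthetic signal, band count, and threshold are invented, and the PCA/eigenvalue-decomposition stages of the paper are omitted from this toy.

```python
# DFT-based sub-band occupancy check on a synthetic signal with
# tones in bins 5 and 12; every other bin is a candidate hole.
import cmath
import math

N = 64
x = [math.cos(2 * math.pi * 5 * n / N) +
     math.cos(2 * math.pi * 12 * n / N) for n in range(N)]

def dft(samples):
    """Direct discrete Fourier transform (O(N^2), fine for a toy)."""
    n_pts = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

spectrum = dft(x)
threshold = 1.0     # assumed energy threshold separating signal/noise
occupied = [k for k in range(N // 2) if abs(spectrum[k]) >= threshold]
holes = [k for k in range(N // 2) if abs(spectrum[k]) < threshold]
print("occupied bins:", occupied)
print(f"{len(holes)} candidate hole bins")
```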

2 citations


Journal ArticleDOI
TL;DR: The advantages of deep learning for designing medical informatics are described, since such an approach is needed to build a good CDSS for health services.
Abstract: Medical informatics to support health services in Indonesia is proposed in this paper. The paper focuses on the analysis of big data for health-care purposes, with the aim of improving and developing clinical decision support systems (CDSS) and assessing medical data for both quality assurance and accessibility of health services. Electronic health records (EHR) are very rich in medical data sourced from patients. All the data can be aggregated to produce information covering medical-history details such as diagnostic tests, medicines and treatment plans, immunization records, allergies, radiological images, multivariate sensor devices, laboratories, and test results. All this information provides a valuable understanding of disease management. In Indonesia, with many rural areas having few doctors, this is an important case to investigate. Data mining of large-scale individual and population data through EHRs can be combined with mobile networks and social media to inform health and public policy. Many researchers have applied the deep learning (DL) approach to data-mining problems in health informatics. In practice, however, the use of DL remains questionable: achieving optimal performance requires relatively large data and resources, while other learning algorithms are relatively fast, produce comparable performance with fewer resources and less parameterization, and offer better interpretability. In this paper, the advantages of deep learning for designing medical informatics are described, since such an approach is needed to build a good CDSS for health services.

1 citation


Journal Article
Chai Xiaofei, Liu Song, Qu Bin, Wang Qian, WU Weiguo 
TL;DR: In this paper, the authors propose VEC-TSS, a vectorization-friendly tile-factor selection algorithm that chooses tile factors by vectorization-benefit analysis for vectorizable loop levels and by locality benefit and parallel granularity for the other levels.
Abstract: When loop tiling is applied to nested loops with pathological sizes, the effect of the tile factor on vectorization is easily overlooked, leading to unaligned data accesses and reduced performance of the tiled loop code. A vectorization-friendly tile-factor selection algorithm, VEC-TSS, is proposed. For vectorizable loop levels, the algorithm determines the tile factor through vectorization-benefit analysis; for other loop levels, it determines the tile factor from locality benefit and parallel granularity. Experimental results show that, for loop programs with pathological sizes, VEC-TSS achieves better speedup than two other tile-factor selection algorithms, while also scaling well.
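Loop tiling itself can be illustrated in miniature with a blocked matrix multiply whose tile factor is a multiple of an assumed vector width, echoing the alignment concern VEC-TSS addresses. Pure Python cannot exhibit real vectorization; this sketch only demonstrates that tiling preserves the result, including for a "pathological" size that is not a multiple of the tile factor.

```python
# Blocked (tiled) matrix multiply. The tile factor T is chosen as a
# multiple of an assumed vector width; n is deliberately not a
# multiple of T to exercise the partial edge tiles.

VECTOR_WIDTH = 4              # assumed SIMD lane count
T = 2 * VECTOR_WIDTH          # tile factor: multiple of the vector width

def matmul_tiled(a, b, n):
    """n x n matrix multiply over tiles of size T."""
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, n, T):
            for jj in range(0, n, T):
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + T, n)):
                            c[i][j] += aik * b[k][j]
    return c

n = 12                        # pathological size: not a multiple of T
a = [[(i + j) % 5 for j in range(n)] for i in range(n)]
b = [[(i * j) % 7 for j in range(n)] for i in range(n)]
naive = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
assert matmul_tiled(a, b, n) == naive
print("tiled multiply matches the naive result")
```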

1 citation


Journal ArticleDOI
TL;DR: A two-level scheduling algorithm based on software-defined networking (SDN) is proposed in this paper, which reduces delay and packet loss rate significantly and achieves the QoS of power services.
Abstract: Owing to its complicated structure, a power communication network struggles to guarantee the quality of service (QoS) of power services. A two-level scheduling algorithm based on software-defined networking (SDN) is proposed in this paper. First, a priority-based scheduling method is used to meet the latency sensitivity of power services. Then, to alleviate congestion, queue bandwidth is adjusted according to network-state information collected through the centralized control of SDN. Finally, Mininet and the Ryu controller are used to build the simulation environment. The test results show that the proposed algorithm reduces delay and packet loss rate significantly, achieving the required QoS.
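The first scheduling level can be sketched as strict priority queueing. The packet names and priority values below are illustrative assumptions; the SDN bandwidth-adjustment level is not modelled in this sketch.

```python
# Strict priority scheduler: dequeue always returns the
# highest-priority (lowest number) packet; a sequence counter keeps
# FIFO order within a priority class.
import heapq

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0            # tie-breaker for same-priority packets

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(2, "metering report")
sched.enqueue(0, "protection trip signal")   # latency-sensitive service
sched.enqueue(1, "video surveillance frame")

order = [sched.dequeue() for _ in range(3)]
print(order)
```

The latency-sensitive packet leaves first regardless of arrival order, which is the property the first level relies on.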

Journal ArticleDOI
TL;DR: New insight is presented into the ECG semantic segmentation problem, which is addressed with a deep learning approach to automatic ECG waveform segmentation; long short-term memory (LSTM) is proposed for this task.
Abstract: Classifying electrocardiogram (ECG) waveform segments is difficult due to physiological variation in heart rate and the differing characteristics of ECG waves in shape, frequency, amplitude, and duration. The P-wave, PR-segment, QRS-complex, ST-segment, and T-wave are extracted as features for classification algorithms that diagnose specific cardiac disorders. This requires algorithms that identify specific points within the ECG wave. Previous computational algorithms for automatic ECG segmentation have been proposed to overcome the limitations of manual inspection of the ECG. This study presents new insight into the ECG semantic segmentation problem, addressing it with a deep learning approach to automatic ECG waveform segmentation; long short-term memory (LSTM) is proposed for this task. The experimental study was performed on six different ECG waveforms representing cardiac disorders, obtained from the PhysioNet QT database. Overall, LSTM achieved accuracy, sensitivity, specificity, precision, and F1-score of 93.36%, 86.85%, 95.78%, 81.79%, and 83.09%, respectively.
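The five evaluation metrics quoted above are standard functions of confusion-matrix counts, computed as follows. The counts in the example are invented to show the formulas; they do not reproduce the paper's numbers.

```python
# Classification metrics from confusion-matrix counts
# (true/false positives and negatives).

def segment_metrics(tp, fp, fn, tn):
    """Return (accuracy, sensitivity, specificity, precision, F1)."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

acc, sens, spec, prec, f1 = segment_metrics(tp=80, fp=20, fn=10, tn=90)
print(f"acc={acc:.2%} sens={sens:.2%} spec={spec:.2%} "
      f"prec={prec:.2%} f1={f1:.2%}")
```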

Journal ArticleDOI
TL;DR: A hexapod robot is created to take over the rescue team's task of searching for disaster victims, so that no further victims come from the rescue team itself; it can be further developed in the future using servos with greater torque and a better control input than a push-button switch.
Abstract: Natural disasters of any kind are undesirable, and loss and damage follow wherever they strike. Protecting property and people is no easy matter. Among those buried by collapsed buildings, some may still be alive and in urgent need of help, but reaching them is too risky for the rescue team while the location remains dangerous. We therefore created a detector hexapod robot to take over the rescue team's task of searching for disaster victims, so that no further victims come from the rescue team itself. The hexapod is a six-legged robot shaped, and walking, like a spider. This research focuses on analysing the push-button switch as the control input for the robot's feet, because the walking technique is a major factor in effective robot navigation. A good method is required to maintain the height of the robot's feet while walking; to achieve this, the push-button switch is used together with inverse-kinematics calculations in each program routine to adjust the position of the end effector on the floor surface. While shifting, navigation runs without failure as long as the foot's position does not touch the floor. Testing was done in two steps: comparing the inverse-kinematics calculations against the x and y inputs applied in the robot's program code, then comparing travel times with and without the push-button switch. The robot in this study can be further developed in the future using servos with greater torque and a better control input than the push-button switch.
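The inverse-kinematics step mentioned above can be sketched for a single two-segment leg in a plane: given a foot target (x, y), the two joint angles follow from the law of cosines. The link lengths, target point, and elbow-down solution branch are illustrative assumptions, not the robot's actual geometry.

```python
# Planar two-link inverse kinematics, checked against forward
# kinematics. theta2 is the knee angle; theta1 the hip angle.
import math

def two_link_ik(x, y, l1, l2):
    """Return (theta1, theta2) in radians placing the foot at (x, y)."""
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(d)                    # assumes target is reachable
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

t1, t2 = two_link_ik(6.0, 3.0, l1=5.0, l2=4.0)
fx, fy = forward(t1, t2, 5.0, 4.0)
print(f"foot lands at ({fx:.3f}, {fy:.3f})")
```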

Journal ArticleDOI
TL;DR: Field testing of the device's search strategy showed that the worst-case complexity of the path-search algorithm is a function of N, the number of paths available in the pedestrian network, implying that the developed system is reliable and can be used by the visually impaired to recognize and navigate routes in real time.
Abstract: Visual impairment is a common disability resulting in poor or no eyesight, and those affected suffer inconveniences in performing their daily tasks. Visually impaired persons require aids to interact with their environment safely. Existing navigation systems such as electronic travel aids (ETAs) are mostly cloud-based and rely heavily on the internet and Google Maps, so deployment in locations with poor internet facilities and poorly structured environments is not feasible. This paper proposes a smart, real-time, standalone route recognition system for visually impaired persons. The proposed system uses a pedestrian route network, an interconnection of paths and their associated route tables, to provide directions to known locations in real time. The Federal University of Technology (FUT), Minna, Gidan Kwanu campus was used as the case study. Field testing of the device's search strategy showed that the worst-case complexity of the algorithm used to search for paths in the pedestrian network is a function of N, the number of paths available in the network. The accuracy of path recognition is 100%. This implies that the developed system is reliable and can be used by the visually impaired to recognize and navigate routes in real time.
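A pedestrian route network of the kind described can be modelled as an adjacency list and searched offline, with no cloud dependency. The node names are invented, and breadth-first search is used here as a simple stand-in, since the paper does not spell out its search strategy.

```python
# Breadth-first route search over a toy pedestrian network;
# each path is visited at most once, so visits scale with the
# number of paths.
from collections import deque

route_network = {
    "gate":         ["junction-a"],
    "junction-a":   ["gate", "library", "junction-b"],
    "junction-b":   ["junction-a", "cafeteria", "lecture-hall"],
    "library":      ["junction-a"],
    "cafeteria":    ["junction-b"],
    "lecture-hall": ["junction-b"],
}

def find_route(network, start, goal):
    """Return the shortest hop-count route from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        route = queue.popleft()
        if route[-1] == goal:
            return route
        for nxt in network[route[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None

print(find_route(route_network, "gate", "lecture-hall"))
```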

Journal ArticleDOI
TL;DR: A high-throughput LDPC decoder is proposed through a fully parallel architecture and a reduction in the maximum iteration limit needed for complete error correction.
Abstract: Low-density parity-check (LDPC) error-control decoders find wide application in both storage and communication systems because of their merits, which include high suitability for parallelization and excellent error-correction performance. Field-programmable gate arrays (FPGAs) provide a robust platform, in terms of parallelism, resource allocation, and speed, for implementing non-binary LDPC decoder architectures. This paper proposes a high-throughput LDPC decoder through a fully parallel architecture and a reduction in the maximum iteration limit needed for complete error correction. A Galois field of order eight was used alongside a non-uniform quantization scheme, resulting in fewer bits per log-likelihood ratio (LLR) in the implementation. Verilog hardware description language (HDL) was used to describe the non-binary error-control decoder. The proposed decoder attained a throughput of 10 Gbps at a 400 MHz clock frequency when synthesized on a ZYNQ 7000-series FPGA.

Journal ArticleDOI
TL;DR: A color image enhancement technique using the lifting wavelet transform (LWT) and contrast-limited adaptive histogram equalization (CLAHE) is presented to overcome the noise amplification, over-enhancement, and under-enhancement found in existing enhancement techniques.
Abstract: Color image enhancement is an important process and a vital precursor to the other stages of digital image processing, because the effectiveness of this stage determines the success of the stages that follow and the quality of the overall result. This paper presents a color image enhancement technique using the lifting wavelet transform (LWT) and contrast-limited adaptive histogram equalization (CLAHE) to overcome the noise amplification, over-enhancement, and under-enhancement found in existing enhancement techniques. Test images from the Computer Vision Database were used for the proposed technique, and performance was evaluated using PSNR and SSIM. The results obtained show average improvements of 56.4% and 20.98% in terms of PSNR and SSIM, respectively.
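For reference, the PSNR figure used in the evaluation above is computed from the mean squared error between two images, as in the sketch below. The toy images are invented; SSIM is omitted because it requires windowed statistics.

```python
# Peak signal-to-noise ratio between two equal-size grayscale
# images: PSNR = 10*log10(MAX^2 / MSE).
import math

def psnr(original, processed, max_val=255.0):
    """PSNR in dB; infinite when the images are identical."""
    flat_o = [p for row in original for p in row]
    flat_p = [p for row in processed for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_o, flat_p)) / len(flat_o)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

original = [[100, 100], [100, 100]]
enhanced = [[110, 110], [110, 110]]  # uniform +10 shift -> MSE = 100
print(f"PSNR = {psnr(original, enhanced):.2f} dB")
```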

Journal ArticleDOI
TL;DR: Experiments show that the proposed hybrid features for plant disease detection achieve better performance than color features alone and suggest fast convergence, with good performance reached at a low number of epochs.
Abstract: With advances in information technology, various ways have been developed to detect diseases in plants, one of which is machine learning. In machine learning, the choice of features affects performance significantly, yet most individual features have limitations for plant disease detection. For that reason, we propose the use of hybrid features for plant disease detection in this paper. We append a local texture descriptor, the local binary pattern (LBP), to color features. The hybrid features are then used as inputs to deep convolutional neural network (DCNN) and VGG16 classifiers. Based on our experiments, the proposed features achieve better performance than color features alone. Our results also suggest fast convergence of the proposed features, as good performance is reached at a low number of epochs.
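The texture descriptor appended to the color features above can be computed as in this minimal pure-Python sketch: each interior pixel is encoded by comparing its eight neighbours to the centre value. The bit ordering is one common convention, chosen here for illustration.

```python
# 8-neighbour local binary pattern (LBP): each neighbour >= centre
# sets one bit of an 8-bit code.

# Neighbour offsets, clockwise from top-left; each contributes one bit.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    """8-bit LBP code for the interior pixel at row y, column x."""
    centre = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 25, 15],
       [50, 60, 70]]
print(lbp_code(img, 1, 1))
```

A histogram of these codes over an image region forms the texture feature vector that is concatenated with the color features.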

Journal ArticleDOI
TL;DR: This paper discusses computer worm detection using machine learning, investigating and improving the generalization capability of autoencoders using regularization and deep autoencoders.
Abstract: This paper discusses computer worm detection using machine learning. More specifically, the generalization capability of autoencoders is investigated and improved using regularization and deep autoencoders. Models are constructed first without autoencoders and thereafter with autoencoders. The models with autoencoders are further improved using regularization and deep autoencoders. Results show an improvement in the models' capability to generalize well to new examples.

Journal ArticleDOI
TL;DR: Temperature, humidity, and CO gas levels are monitored with an environmental application using a multi-sensor network (MSN), with web displays and sensors as devices, so that data and information can be obtained through mobile and other devices on the internet.
Abstract: This paper aims to monitor temperature, humidity, and CO gas levels using an environmental application with a multi-sensor network (MSN). The system was applied in real life and in real time, so that data and information could be obtained through mobile and other devices over the internet. In this research, the environment is monitored remotely using web displays, with sensors as the measuring devices. Data were collected in outdoor and indoor parking areas, both with and without obstacles, yielding results for each of the different environmental conditions.

Journal ArticleDOI
TL;DR: This paper concludes that the Naive Bayes algorithm is suitable for Indonesian GBAORD budget classification, since the robustness of the algorithm is proven high, with 96.788 ± 0.185% average accuracy.
Abstract: The Indonesian Government Budget Appropriations or Outlays for Research and Development (GBAORD) has been analyzed manually every year to measure government expenditure on research and development. The analysis process involves several experts making the budget classification. This method, commonly known as manual classification, has downsides: time consumption and inconsistent results. Therefore, a study implementing machine learning for GBAORD budget classification, to avoid inconsistency, was proposed in previous research. For further analysis, this paper evaluates the performance of the Naive Bayes algorithm for GBAORD budget classification, measuring its robustness in classifying GBAORD data from 2017 to 2019. Three Naive Bayes models with different preprocessing methods and features are used. This paper concludes that the Naive Bayes algorithm is suitable for Indonesian GBAORD budget classification, since its robustness is proven high, with an average accuracy of 96.788 ± 0.185%.
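The kind of classifier evaluated above can be sketched as a compact multinomial Naive Bayes over budget-line text. The two classes, training lines, and query below are invented for illustration; Laplace (add-one) smoothing is used, as is standard.

```python
# Multinomial Naive Bayes with add-one smoothing over toy
# budget-line descriptions.
import math
from collections import Counter

train = [
    ("research on crop disease resistance", "R&D"),
    ("laboratory equipment for materials research", "R&D"),
    ("development of vaccine production methods", "R&D"),
    ("office building maintenance services", "non-R&D"),
    ("staff travel and accommodation costs", "non-R&D"),
    ("procurement of office furniture", "non-R&D"),
]

class_docs = Counter(label for _, label in train)
word_counts = {label: Counter() for label in class_docs}
for text, label in train:
    word_counts[label].update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the class with the highest smoothed log-probability."""
    scores = {}
    for label in class_docs:
        total = sum(word_counts[label].values())
        score = math.log(class_docs[label] / len(train))   # prior
        for w in text.split():
            score += math.log((word_counts[label][w] + 1)
                              / (total + len(vocab)))       # likelihood
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("funding for disease research laboratory"))
```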