
Showing papers in "Computer Science and Information Technology in 2019"


Journal ArticleDOI
TL;DR: An innovative model oriented to E-commerce sales neural network forecasting based on multi-attribute processing proves how it is possible to embed many data mining algorithms into a unique prototypal information system connected to a big data system, and how it can work on real business intelligence.
Abstract: The proposed paper shows different tools adopted in an industry project oriented to business intelligence (BI) improvement. The research outputs concern mainly data mining algorithms able to predict sales, logistic algorithms useful for managing the dislocation of products across the whole marketing network constituted by different stores, and web mining algorithms suitable for social trend analyses. The Weka, RapidMiner and KNIME tools were applied for the predictive data mining and web mining algorithms, while mainly Dijkstra's and Floyd-Warshall's algorithms were adopted for the logistic ones. The proposed algorithms are suitable for an upgrade of the information infrastructure of an industry oriented to strategic marketing. All the facilities are enabled to transfer data into a Cassandra big data system behaving as a collector of massive data useful for BI. The goals of the BI outputs are the real-time planning of the warehouse assortment and the formulation of strategic marketing actions. Finally, an innovative model oriented to E-commerce sales neural network forecasting based on multi-attribute processing is presented. This model can process data from the other data mining outputs, supporting logistic actions. It proves how it is possible to embed many data mining algorithms into a unique prototypal information system connected to a big data system, and how it can work on real business intelligence. The goal of the proposed paper is to show how different data mining tools can be adopted into a unique industry information system.
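
As a hedged illustration of the logistic component, the sketch below runs Dijkstra's algorithm over a small store network; the store names and distances are hypothetical, not taken from the paper.

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel cost from a source store to every other store.
    graph: dict mapping store -> list of (neighbor, distance) pairs."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical store network, distances in km
stores = {
    "warehouse": [("store_A", 12), ("store_B", 7)],
    "store_A": [("store_C", 5)],
    "store_B": [("store_A", 3), ("store_C", 9)],
    "store_C": [],
}
print(dijkstra(stores, "warehouse"))
```

Floyd-Warshall, the other named algorithm, would instead compute all-pairs distances in one pass, which suits precomputing a full store-to-store cost matrix.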

23 citations


Journal ArticleDOI
TL;DR: Evaluation of classifiers on a chronic kidney disease dataset shows that the J48 decision tree gave the best result, but naive Bayes had the lowest execution time, making it the fastest classifier.
Abstract: Data mining, being an experimental science, is very important, especially in the health sector where we have large volumes of data. Because data mining is experimental, getting accurate predictions can be tasking, so obtaining the maximum accuracy from each classifier is necessary. It is therefore important that the appropriate feature selection method be selected. Feature selection is highly relevant in predictive analysis and should not be overlooked: it helps reduce the execution time and provides a more accurate and reliable result. Therefore, more research on predictive analysis, and on how reliable these predictions are, needs to be carried out. Application of data mining techniques in the health sector ensures that the right treatment is given to patients. This study was implemented using WEKA and is aimed at using three classifiers, multilayer perceptron, naive Bayes and the J48 decision tree, in the prediction of a chronic kidney disease dataset. The aim of this research is to evaluate the performance of the classifiers used based on the following metrics: accuracy, specificity, sensitivity, error rate and precision. Based on the performance metrics mentioned above, results show that the J48 decision tree gave the best result but naive Bayes had the lowest execution time, making it the fastest classifier.
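
The study itself ran in WEKA; the sketch below is an analogous comparison in scikit-learn (an assumption, not the authors' setup), with DecisionTreeClassifier standing in for J48 and a hypothetical preprocessed CSV with a binary 0/1 class column.

```python
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical preprocessed CKD dataset with numeric features
# and a binary 0/1 'class' column
df = pd.read_csv("ckd_preprocessed.csv")
X, y = df.drop(columns="class"), df["class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "J48-like decision tree": DecisionTreeClassifier(random_state=42),
    "naive Bayes": GaussianNB(),
    "multilayer perceptron": MLPClassifier(max_iter=1000, random_state=42),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    pred = model.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f} "
          f"precision={precision_score(y_te, pred):.3f} "
          f"sensitivity={recall_score(y_te, pred):.3f} "
          f"train_time={elapsed:.3f}s")
```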

17 citations


Journal ArticleDOI
TL;DR: A new computation method for 2D line clipping against a rectangular window is introduced as a Scratch extension in order to assist computer graphics education; it is found to be very fast and simple, and it can be implemented easily in any programming language or integrated development environment.
Abstract: Line clipping is a fundamental topic in an introductory computer graphics course. An understanding of a line-clipping algorithm is reinforced by having students write actual code and see the results by choosing a user-friendly integrated development environment such as Scratch, a visual programming language especially useful for children. In this article a new computation method for 2D line clipping against a rectangular window is introduced as a Scratch extension in order to assist computer graphics education. The proposed method has been compared with the Cohen-Sutherland, Liang-Barsky, Cyrus-Beck, Nicholl-Lee-Nicholl and Kodituwakku-Wijeweera-Chamikara methods with respect to the number of operations performed and the computation time. The performance of the proposed method has been found to be better than that of all the above-mentioned methods; it is very fast, simple, and easily implemented in any programming language or integrated development environment. The simplicity and elegance of the proposed method make it suitable for implementation by a student or pupil in a lab exercise.
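
The abstract does not give the proposed method itself, so for context here is one of the named baselines, Liang-Barsky, in a short Python sketch (Python rather than Scratch, for compactness):

```python
def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) to the window; None if fully outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to and outside this edge
        else:
            t = q / p
            if p < 0:
                if t > t1: return None
                if t > t0: t0 = t    # entering intersection
            else:
                if t < t0: return None
                if t < t1: t1 = t    # leaving intersection
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)

print(liang_barsky(-5, 2, 15, 8, 0, 0, 10, 10))  # -> (0.0, 3.5, 10.0, 6.5)
```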

6 citations


Journal ArticleDOI
TL;DR: A machine learning (ML) model is proposed as a promising approach to address the underlying failure phenomena in the AM process, and it is described how a ML model can be distributed to form an interactive learning network of smart AM components to fulfil the Industry 4.0 requirements.
Abstract: Additive manufacturing (AM) is a crucial component of a smart factory that promises to change traditional supply chains. However, the parts built using state-of-the-art 3D printers have noticeably unpredictable mechanical properties. In this paper, a machine learning (ML) model is proposed as a promising approach to address the underlying failure phenomena in the AM process. The paper also describes how a ML model can be distributed to form an interactive learning network of smart AM components to fulfil the Industry 4.0 requirements, including self-organization, distributed control, communication, and real-time decision-making capability.

5 citations


Journal ArticleDOI
TL;DR: An insight into the need for blockchain technology in a developing economy, with Nigeria as a case study, is provided, along with a recommendation on the role of the ICT regulatory agency in achieving sustainable blockchain technology adoption.
Abstract: Governments of emerging economies need to stimulate awareness and adoption of emerging technologies to facilitate effective and efficient citizen service delivery. The need for an integrated Information Communication Technology ecosystem in developing countries nurtures the call for adoption of emerging technologies. As with every emerging technology, the adoption of blockchain depends on the extent to which government and relevant stakeholders take the lead in supporting and unveiling market-creating innovation on blockchain platforms. This paper provides an insight into the need for blockchain technology in a developing economy, with Nigeria as a case study. Possible challenges and limitations of blockchain implementation are highlighted, with a recommendation on the role of the ICT regulatory agency in achieving sustainable blockchain technology adoption. Keywords: Blockchain, Governance, Nigeria

5 citations


Journal Article
TL;DR: A convolutional neural network classifier built upon the TensorFlow framework classifies a user-uploaded image as Eczema, Impetigo or Melanoma, allowing an online user to detect skin diseases in humans and making advice or possible medical actions available within a short period.
Abstract: Skin diseases are reported to be the most common disease in humans among all age groups and a major cause of illness in sub-Saharan Africa. However, diagnosis and treatment of skin disease are seen to be difficult, due to the orthodox approaches used by many medical centers globally. In recent times, artificial intelligence has been applied to enhance computer vision applications to permit easy detection of patterns in images. Notwithstanding this breakthrough in technology, the dermatological process in Ghana is yet to be automated, making the diagnosis of skin disease difficult and time-consuming. The current study sought to develop a web-based skin disease detection system (Medilab-Plus), which allows an online user to detect skin diseases in humans and makes advice or possible medical actions available within a short period. A convolutional neural network classifier was built upon the TensorFlow framework for classifying a user-uploaded image as Eczema, Impetigo or Melanoma. Experimental results of the proposed system exhibit disease identification accuracy of 88% for Atopic dermatitis, 85% for Acne vulgaris and 84.7% for Scabies.
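
A minimal sketch, assuming a 128x128 RGB input, of what such a TensorFlow/Keras three-class classifier could look like; the authors' exact architecture is not given in the abstract.

```python
from tensorflow.keras import layers, models

# Assumed 128x128 RGB input; the three classes come from the abstract
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # Eczema / Impetigo / Melanoma
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # hypothetical training data
```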

5 citations


Journal ArticleDOI
TL;DR: The security attacks in WSN are identified and grouped into three major categories (service integrity and network availability, privacy and confidentiality, and the integrity of data), and major solutions for these attacks are identified.
Abstract: Wireless Sensor Network (WSN) utilizes small sensors with constrained properties to broadcast, collect, and sense data in numerous applications. As WSN is a technology of growing interest, security challenges have become the main issue, especially in tasks like mission-critical applications. In this article, we identify the security attacks in WSN and the major solutions for these attacks. These attacks fall into three major categories: service integrity and network availability, privacy and confidentiality, and the integrity of data. In addition, we explain the methods and techniques used in these categories for defense purposes and summarize the open research issues in these areas.

5 citations


Journal ArticleDOI
TL;DR: This research delves into emerging trends in data management methods, among them agent-based techniques, active disk technology, and the use of map-reduce functions in unstructured data management.
Abstract: In recent years, the demand for data processing has been on the rise, prompting researchers to investigate new ways of managing data. Our research delves into emerging trends in data management methods, among them agent-based techniques, active disk technology, and the use of map-reduce functions in unstructured data management. Motivated by this new trend, our architecture employs mobile agent technology through an open source framework called SPADE to implement a simulation platform called SABSA. This research compares the performance of four network storage architectures, each handling sorted and unsorted metadata: store-and-forward processes (SAF), Object Storage Devices (OSD), a mobile agent with a Domain Controller (DMC) enhanced with a map-reduce function, and a mobile agent with a Domain Controller and child DMC enhanced with map-reduce (ABMR). In order to accurately establish the performance improvements in the new hybrid agent-based models and map-reduce functions, an analytic simulation model was developed on which experiments based on the identified storage architectures were performed, and analytical data and graphs were generated. The results indicated that all the agent-based storage architectures minimize latencies by up to 45% and reduce access time by up to 21% compared to SAF and OSD.

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the comparative effectiveness of teaching biology through interactive multimedia versus the conventional teaching method on senior high school students' achievement and found that both methods were quite effective for teaching Photosynthesis in Biology.
Abstract: This study investigated the comparative effectiveness of teaching biology through interactive multimedia versus the conventional teaching method on senior high school students' achievement. The pretest-posttest non-equivalent quasi-experimental design was used for this study. One hundred and ten (110) form three (Form 3) General Science students who had Biology as an elective subject were selected for the study. They were grouped and labeled control and experimental groups. Students in the experimental group were taught through the use of interactive multimedia, whereas the control group was taught through the traditional teaching approach. The study found that both methods were quite effective for teaching photosynthesis in Biology; however, of the two, the multimedia approach was found more suitable for teaching abstract topics. The study also reported no statistically significant differences in the students' academic performance by gender. These findings suggest that the academic achievements of students in Biology can be improved with multimedia instruction. The study recommends that the computer should be used to complement the teacher's teaching but should not take over the teaching process. A similar study could be carried out in a similar environment but should include more than one topic, as this study used only one.

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyse the roles of social media in voter sensitization, the presence of INEC in cyberspace, and how INEC can make itself more active in cyberspace for effective information dissemination and voter education.
Abstract: Social media has become a prominent and powerful forum for voter enlightenment and political activism, and the fastest means of information dissemination. An individual without a social media account is seen in society as obsolete. Social media has indeed become part of our lives, personally and professionally: an average smartphone owner cannot do without visiting a social media platform daily. Social media therefore can be used effectively to target particular voters, encourage people to exercise their franchise, and make information go viral. Social media platforms such as Instagram, Twitter, Facebook and YouTube help to activate citizens' engagement in political life. The Independent National Electoral Commission (INEC), saddled with the responsibility of educating voters on their electoral roles and responsibilities, unfortunately does not have a pronounced presence in the social space. This paper analyses the roles of social media in voter sensitization, the presence of INEC in cyberspace, and how INEC can make itself more active in cyberspace for effective information dissemination and voter education.

3 citations


Journal ArticleDOI
TL;DR: A reputation system was formulated based on decision maker ratings, non-registered users' ratings, and registered users' ratings that can be understood and implemented in a consumer-to-consumer e-commerce platform.
Abstract: Consumer-to-consumer e-commerce is a way of buying and selling goods and services in which consumers sell goods and services to other consumers. Over the years, the number of internet users in Nigeria has rapidly increased, which has in turn led to fast growth in the e-commerce market. The major setback in a Nigerian consumer-to-consumer e-commerce application is the lack of information about the history and the behavior of sellers. Existing reputation systems in the literature relied only on ratings provided by registered users, and this often resulted in a cold-start problem. In this paper, a reputation system was formulated based on decision maker ratings, non-registered users' ratings, and registered users' ratings. The reputation system can be understood and implemented in a consumer-to-consumer e-commerce platform.
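
The abstract does not give the combination formula, so the sketch below is only a hypothetical weighted-average reading of the three rating sources; the weights are assumptions, not the paper's.

```python
def reputation_score(registered, non_registered, decision_maker,
                     weights=(0.5, 0.2, 0.3)):
    """Hypothetical weighted combination of three rating sources.
    Each argument is a list of ratings on a 1-5 scale; the weights
    are assumed, not taken from the paper."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    w_reg, w_non, w_dm = weights
    return (w_reg * mean(registered)
            + w_non * mean(non_registered)
            + w_dm * mean(decision_maker))

# A seller with no registered-user history still gets a score,
# which is how extra sources can soften the cold-start problem
print(reputation_score([], [4, 5], [4]))  # 0.5*0 + 0.2*4.5 + 0.3*4 = 2.1
```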

Proceedings ArticleDOI
TL;DR: In this article, the authors proposed an approach for image segmentation which combines community detection in multiplex networks, in which a layer represents a certain image feature, with superpixels.
Abstract: Despite the large number of techniques and applications in the field of image segmentation, it is still an open research field. A recent trend in image segmentation is the use of graph theory. This work proposes an approach which combines community detection in multiplex networks, in which a layer represents a certain image feature, with superpixels. There are approaches that produce segmentations of good quality using a single feature, or combining several features of the image into a single graph, for community detection and segmentation. However, with the use of multiplex networks it is possible to use more than one image feature without the mathematical operations that can cause loss of information about the image features during the generation of the graphs. The experiments presented in this work show that such a method can offer quality, robust segmentations.
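
As a hedged, single-layer simplification of the idea (one feature layer rather than the authors' multiplex construction), the sketch below turns SLIC superpixels into graph nodes, links touching superpixels with a color-similarity weight, and segments via modularity-based community detection:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from skimage import data
from skimage.segmentation import slic

img = data.astronaut()                      # sample image
labels = slic(img, n_segments=200, compactness=10, start_label=0)
n = labels.max() + 1
means = np.array([img[labels == i].mean(axis=0) for i in range(n)])  # mean RGB

G = nx.Graph()
G.add_nodes_from(range(n))
# link superpixels that touch horizontally or vertically
for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
    if a != b:
        G.add_edge(a, b)
for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
    if a != b:
        G.add_edge(a, b)
for u, v in G.edges:
    # higher weight for similar mean colors; the 50.0 scale is arbitrary
    G[u][v]["weight"] = float(np.exp(-np.linalg.norm(means[u] - means[v]) / 50.0))

segments = greedy_modularity_communities(G, weight="weight")
print(f"{len(segments)} segments found from {n} superpixels")
```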

Journal ArticleDOI
TL;DR: By using the plaster replica method based on a 3D print of the facial model, the chart pattern of an optimized small-face mask was achieved; statistical analysis shows that the main facial type is short and narrow, which is the fifth cluster in this article.
Abstract: Introduction: In China, respirators are widely used to protect the public from air pollution. The design of a respirator is based on anthropometric data obtained from groups of people in RFTPs (respirator fit test panels). Meanwhile, respirator-user fit is not satisfactory, as unsatisfactory seals exist. Methods: To solve the respirator-user fit problem in China, this study was divided into four parts: public head-face measurement and analysis of head data clustering; reverse establishment of a head model based on the clustering results; and, using the model, forward design of the mask structure. Results: Combined with rotation component matrix counting and the relative index, 3 out of 7 representative facial indexes can be used as clustering variables: nose length, bitragion breadth and face height. The optimal number of clusters was 5, determined by the Mix-F statistic. According to the methods of mathematical statistics, the main facial type is short and narrow, which is the fifth cluster in this article. By using the plaster replica method based on a 3D print of the facial model, the chart pattern of an optimized small-face mask was achieved.
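
The clustering algorithm itself is not named in the abstract; as a hedged illustration, the sketch below clusters synthetic measurements of the three selected indexes into five groups with k-means (the data is randomly generated, not the survey's).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: nose length, bitragion breadth, face height (mm) - synthetic
faces = rng.normal(loc=[50, 140, 120], scale=[4, 8, 7], size=(300, 3))

X = StandardScaler().fit_transform(faces)   # put the indexes on a common scale
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

sizes = np.bincount(km.labels_, minlength=5)
print("cluster sizes:", sizes)              # the dominant cluster drives sizing
```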

Journal ArticleDOI
TL;DR: This work presents a proposal of a visual tool to assist the creation of academic research projects, dissertations and theses, which has the form of a framework, called Research Project Model Canvas with fields defined according to the needs of creating a research project.
Abstract: This work presents a proposal of a visual tool to assist the creation of academic research projects, dissertations and theses. Its metrics are based on business and management success cases. In the creation and management of projects in teams, visual strategies are used to present and record the parameters involved in the scope of the project on a screen, which can be composed of a frame with predefined fields or of connection lines forming a flowchart. Existing tools let researchers view only isolated parts of the project, such as bibliographic references or correlation nodes between keywords, so it becomes necessary to create a strategy that enables the creator of the project and the team involved to visualize the essence of the project about to be created, to predict needs, failures and objectives, and to restructure the project to suit the research conditions. This strategy has the form of a framework, called Research Project Model Canvas, with fields defined according to the needs of creating a research project; its tables are organized in a logical order of reading, presentation and connection between each one.

Journal ArticleDOI
TL;DR: Experimental results demonstrated that the proposed real time smart building temperature monitoring system is a cross-platform and low cost system for IoT applications.
Abstract: Wireless sensor networks (WSN) have been widely adopted by applications of the Internet of Things (IoT). In this paper, we present a practical design of a real-time temperature monitoring system for smart buildings using an XBee-based mesh WSN. The proposed system collects temperature data from wireless sensors and dynamically displays those data using Matplotlib with Python. The data is simultaneously recorded in an SQLite3 database for cloud usage. Moreover, we evaluated the link quality of the wireless transmission in our system. Experimental results demonstrated that the proposed real-time smart building temperature monitoring system is a cross-platform and low-cost system for IoT applications. It could easily be applied to single board computer systems such as the Raspberry Pi, Jetson Nano, or LattePanda to further reduce the cost.
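
A minimal sketch of the logging-and-plotting side using Python's standard sqlite3 module and Matplotlib, assuming sensor frames have already been parsed into (node, temperature) pairs; the XBee serial parsing is omitted and the table layout is an assumption, not the paper's schema.

```python
import sqlite3
import time
import matplotlib.pyplot as plt

conn = sqlite3.connect("temperature.db")
conn.execute("""CREATE TABLE IF NOT EXISTS readings
                (ts REAL, node TEXT, celsius REAL)""")

def log_reading(node_id, celsius):
    """Store one parsed sensor reading with a timestamp."""
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (time.time(), node_id, celsius))
    conn.commit()

def plot_node(node_id):
    """Plot the recorded temperature history of one sensor node."""
    rows = conn.execute(
        "SELECT ts, celsius FROM readings WHERE node = ? ORDER BY ts",
        (node_id,)).fetchall()
    ts, temps = zip(*rows)
    plt.plot(ts, temps, label=node_id)
    plt.xlabel("time (s)")
    plt.ylabel("temperature (°C)")
    plt.legend()
    plt.show()

log_reading("node-1", 24.6)   # hypothetical parsed sensor value
plot_node("node-1")
```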

Journal ArticleDOI
TL;DR: This project proposes a method of converting pixel-based frames into a graphical vector format and applying motion tracking methods to compress the rendered video beyond current compression techniques, achieving an average compression rate of 88% over industry-standard compression algorithms on ten sample H.264-encoded animation videos.
Abstract: As the age of technology progresses, the demand for video with higher resolution and bitrate increases accordingly, and as video compression algorithms approach near-theoretically perfect compression, more bandwidth is necessary to stream higher-quality video. Higher pixel resolutions do not change the fact that scaling individual frames using bilinear or bicubic filtering naturally causes the video to lose detail and quality. In addition, as the number of pixels per frame continues to increase, so does the file size of each frame and ultimately the file size of the fully rendered video. Over time, as file size and required bandwidth increase, the cost of hardware systems multiplies, requiring a solution for further compression. Using core concepts of computer vision and Bezier models, this project proposes a method of converting pixel-based frames into a graphical vector format and applying motion tracking methods to compress the rendered video beyond current compression techniques. The algorithm uses the Canny operator to break pixel-based frames down into points and then obtains Bezier curves by taking the matrix pseudoinverse. By tracking the motion of these curves through multiple frames, we group curves with similar motion into "objects" and store their motion and components, thus compressing our rendered videos: adding scalability without losing quality. Through this approach, we are able to achieve an average compression rate of 88% over industry-standard compression algorithms for ten sample H.264-encoded animation videos. Future work with such approaches could include modeling different lighting or shading with similar Bezier splines, as well as bypassing pixel-based recording altogether by introducing a method to record video in a directly scalable format.
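
The two named steps, Canny edge extraction and a pseudoinverse-based Bezier fit, can be sketched as below; the contour selection is deliberately simplified, the input frame is hypothetical, and the paper's tracking and grouping stage is omitted.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
edges = cv2.Canny(frame, 100, 200)
ys, xs = np.nonzero(edges)
points = np.column_stack([xs, ys]).astype(float)[:200]  # one edge run, simplified

t = np.linspace(0.0, 1.0, len(points))
# Bernstein basis matrix for a cubic Bezier: B @ control_points ~= points
B = np.column_stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])
control_points = np.linalg.pinv(B) @ points   # least-squares fit, 4 x 2
print(control_points)
```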

Journal ArticleDOI
TL;DR: Testing compared simulation results of the standard traffic-light settings against an imitation of the real system using modifications of the Norwegian traffic-light states; the control system was able to reduce the travel delays slightly.
Abstract: Traffic lights have a vital role as regulatory systems to control the flow of vehicles in urban networks. This research is based on a real case: traffic lights installed at a massive intersection of an urban network consisting of four sections. The control system implements a modification of the Norwegian traffic-light states. The behavior of the traffic-light states was modeled using the Petri net method; for model verification and validation, invariants and simulation were applied. The purpose of implementing this control system was to reduce travel delays. The intersection performance level was good while the average travel delay on all sections was low. Testing compared the simulation results of the standard settings against an imitation of the real system using modifications of the Norwegian traffic-light states. The control system was able to reduce the travel delays slightly. The average Level of Service (LoS) index of the roads for all sections was at level D; this improved the performance of the intersection, but not yet significantly. In addition to adjusting the traffic lights, flyovers are urgently needed to improve travel delays.
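
The paper's four-section net is not reproduced in the abstract; as a minimal illustration of the Petri net token game used for this kind of modeling, here is a toy single light cycled through its states (an assumption-level example, not the authors' model):

```python
# Places hold tokens; a transition fires only when every input place
# is sufficiently marked, moving the light green -> yellow -> red.
marking = {"green": 1, "yellow": 0, "red": 0}
transitions = {
    "end_green":  ({"green": 1},  {"yellow": 1}),
    "end_yellow": ({"yellow": 1}, {"red": 1}),
    "end_red":    ({"red": 1},    {"green": 1}),
}

def fire(name):
    pre, post = transitions[name]
    if all(marking[p] >= w for p, w in pre.items()):   # transition enabled?
        for p, w in pre.items():
            marking[p] -= w
        for p, w in post.items():
            marking[p] += w
        return True
    return False

for t in ("end_green", "end_yellow", "end_red"):
    fire(t)
    print(t, marking)
```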

Journal ArticleDOI
TL;DR: The proposed model automatically learns and classifies clinical sentences into multi-faceted clinical classes, which can help physicians to navigate patients' medical histories easily, and the work closes with a generalized conclusion on clinical document classification and references.
Abstract: Deep learning has achieved remarkable performance in many classification tasks such as image processing and computer vision. Due to its impressive performance, deep learning techniques have found their way into natural language processing tasks as well. Deep learning methods are based on neural network architectures, such as CNNs (Convolutional Neural Networks) with many layers, and have shown state-of-the-art performance on many classification tasks across several research works. Deep learning has also shown great promise in many NLP (natural language processing) tasks such as learning text representations. In this paper, we study the possibility of using deep learning methods and techniques in clinical document classification. We review various deep learning-based techniques and their applications in classifying clinical documents. Further, we identify research challenges and describe our proposed convolutional neural network with residual connections and range normalization. Our proposed model automatically learns and classifies clinical sentences into multi-faceted clinical classes, which can help physicians to navigate patients' medical histories easily. Our proposed technique uses sentence embedding and a convolutional neural network with residual connections and range normalization. To the best of our knowledge, this is the first time that sentence embedding and deep convolutional neural networks with residual connections and range normalization have been simultaneously applied to text processing. Lastly, this work closes with a generalized conclusion on clinical document classification and references.
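
A hedged sketch of a text CNN with a residual connection over sentence embeddings; the dimensions and class count are assumptions, and the paper's "range normalization" is approximated here with layer normalization, which is a substitution rather than the authors' method.

```python
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(64, 300))      # 64 tokens x 300-d sentence matrix
x = layers.Conv1D(300, 3, padding="same", activation="relu")(inputs)
x = layers.Add()([x, inputs])               # residual connection
x = layers.LayerNormalization()(x)          # stand-in for range normalization
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(10, activation="softmax")(x)  # assumed clinical classes

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```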

Journal ArticleDOI
TL;DR: An integrated model for the flexible job-shop scheduling problem with maintenance activities is developed, and two multi-objective optimization methods are compared to find the Pareto-optimal front in the flexible job-shop problem case.
Abstract: This paper develops an integrated model for the flexible job-shop scheduling problem with maintenance activities, using reliability models to schedule the maintenance. The model involves two objectives: minimization of the maximum completion time for the flexible job-shop production part, and minimization of system unavailability for the PM (preventive maintenance) part. To achieve these objectives, two decisions must be taken at the same time: assigning n jobs to m machines so as to minimize the maximum completion time, and finding the appropriate times to perform PM activities so as to minimize system unavailability. These objectives are pursued while considering machine-dependent setup times for operations and release times for jobs. The number of maintenance activities and the PM intervals are not fixed in advance. Two multi-objective optimization methods are compared to find the Pareto-optimal front in the flexible job-shop problem case. Given the promising results obtained, a benchmark with a large number of test instances is employed.
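
The two optimization methods are not named in the abstract; the hedged sketch below only shows the final step common to such comparisons, extracting the Pareto-optimal front from candidate schedules scored on (makespan, unavailability), both minimized, using made-up numbers.

```python
def pareto_front(solutions):
    """Keep solutions not dominated in both objectives (minimization)."""
    front = []
    for s in solutions:
        dominated = any(o[0] <= s[0] and o[1] <= s[1] and o != s
                        for o in solutions)
        if not dominated:
            front.append(s)
    return front

# (makespan, system unavailability) for hypothetical candidate schedules
candidates = [(120, 0.08), (110, 0.12), (130, 0.05), (125, 0.09)]
print(pareto_front(candidates))   # (125, 0.09) is dominated by (120, 0.08)
```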

Journal ArticleDOI
TL;DR: Complexity Theory and Interaction Theory are borrowed to shed light on why there may be so many different CISO reporting structures, even for companies of the same size in the same industry faced with the same information security risks, and best practices for evolving an effective reporting structure are recommended.
Abstract: The ideal reporting structure for the Chief Information Security Officer (CISO) function is not yet settled. Should the CISO report to the Chief Information Officer, Chief Operations Officer, Chief Financial Officer, Chief Internal Auditor, General Counsel, or Chief Executive Officer? Although existing literature provides recommended reporting structures for the CISO position, most practitioners and researchers discourage the adoption of a "one size fits all". This study borrows from Complexity Theory and Interaction Theory to shed light on why we may have so many different CISO reporting structures, even for companies of the same size in the same industry faced with the same information security risks. Using Complexity Theory, we posit that although the initial CISO reporting structure is unpredictable, organizations as open systems have an inbuilt capacity to self-organize, self-motivate, and learn to adapt the CISO reporting structure to their own work environment. Using Interaction Theory, we posit that the emerging reporting structure is created by the interaction between factors inherent in the decision makers of the organization and factors inherent in the CISO function. This implies that ideal reporting structures of the information security organization will inevitably vary according to the organization's industry, mission, maturity, culture, risk exposure, resources, capabilities, and prevailing decision-making and governance infrastructure. Using a case study research method, we relied on numerous CISO interviews available in open sources and our own interviews of two seasoned CISOs. The study recommends best practices for evolving an effective reporting structure for the CISO function.

Journal Article
TL;DR: In this paper, the authors share their experience of data collection at two public universities in Bayelsa State, Nigeria, describing the data collection process and highlighting the challenges faced; researchers are encouraged to share their data collection experiences to help future researchers be adequately prepared, and educational data mining researchers are encouraged to support the goals of the educational data mining community by studying students from every learning environment.
Abstract: Educational data mining and learning analytics encourage developing methods for discovering student learning patterns and behaviors by investigating the distinct sets of data available in learning environments. Researchers in this domain have developed useful models by exploring data from different learning environments. Most of the data used by these researchers comes from computer-based learning environments, where datasets can be easily fetched and analyzed. For the goal of educational data mining to be fully realized, researchers must investigate data from every learning environment, whether computer-based or traditional, and whether or not the institution has an information management system. With this view in mind, this research aims to share the experience of data collection at two public universities in Bayelsa State, Nigeria. The research describes the data collection process and highlights the challenges faced. It concludes by encouraging researchers to share their data collection experiences, to help future researchers be adequately prepared, and by encouraging educational data mining researchers to support the goals of the educational data mining community by studying students from every learning environment.

Journal ArticleDOI
TL;DR: A developer or a programmer may find that the solution, TextGDS (a SAS macro), is even better than the mainframe GDG structure in certain respects, and it helps to fill the void in UNIX-SAS.
Abstract: IBM mainframes in the z/OS environment provide a generational structure, often referred to as a Generation Data Group (GDG), for file storage to maintain snapshots of related data [1]. Such data resulting from business operations within a servicing organization are not uncommon. This structure can hold TEXT data sets without a problem. However, on a UNIX or Linux platform, a comparable structure is unavailable for use by SAS for storing data as TEXT files. This paper contains a solution to this problem and shows a comparison of what the mainframe GDG offers against the solution offered. A developer or a programmer may find that the solution, TextGDS (a SAS macro), is even better than the mainframe GDG structure in certain respects. Although there are both limitations and delimitations when using TextGDS, the tool helps to fill the void in UNIX-SAS.

Journal ArticleDOI
TL;DR: In this paper, the authors highlight the potential of investing in a comprehensive e-tourism mobile application in Saudi Arabia that would improve the tourism experience for local and international visitors to Saudi Arabia.
Abstract: The purpose of this paper is to highlight the potential of investing in a comprehensive e-tourism mobile application in Saudi Arabia that would improve the tourism experience for local and international visitors. It took the city of Jeddah as a case to explore this need. The paper analysed four current applications to explore the gap that needs to be addressed. After that, an online structured survey with 70 responses was conducted to examine whether the local market requires such a solution. The findings showed that even locals find some difficulties when using the current solutions. The value of this study is to trigger the giant local and international players to consider proposing a comprehensive mobile application, to get the most out of the travel experience and to generate a considerable source of income.

Journal ArticleDOI
TL;DR: This paper leverages knowledge of artificial intelligence and genetic programming, comparing mammalian genes to banking and credit transactions, to predict instances of fraudulent transactions.
Abstract: This paper will leverage knowledge of artificial intelligence and genetic programming, comparing mammalian genes to banking and credit transactions, to predict instances of fraudulent transactions. After identifying features of the genetic algorithm associated with information in credit card transactions, this research will create a model of the information in transactions based upon mammalian genes.
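
The abstract leaves the encoding unspecified; as a hedged illustration of the genetic-algorithm machinery involved (selection, crossover, mutation), here is a toy run evolving a binary feature mask, with a stand-in fitness function rather than the paper's gene-to-transaction mapping.

```python
import random

N_FEATURES = 10
relevant = {1, 3, 4, 8}                     # hypothetical informative features

def fitness(mask):
    """Toy fitness: reward selecting informative features, penalize size."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & relevant) - 0.1 * len(chosen)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # mask should converge toward the relevant features
```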

Journal ArticleDOI
TL;DR: A novel technique for different cases of node failure is proposed, and the effectiveness of the proposed algorithm is demonstrated by simulation results.
Abstract: Wireless sensor network applications, especially those operating in hostile environments such as battlefield reconnaissance, are vulnerable to significant damage, due to which single, multiple, or simultaneous sensor node failures may occur and leave the wireless sensor network separated into several partitions. The rapid recovery of a partitioned network is essential for inter-node connectivity. Recently, a number of approaches have been proposed for the restoration of inter-node connectivity; however, these approaches have not considered energy efficiency, coverage awareness and connectivity restoration in an integrated manner. This paper fills that gap. A novel technique for different cases of node failure is proposed: initially, sensor nodes are deployed very densely, and after deployment the whole network relocates so that each node positions itself within half of the communication range of its neighbor node. The effectiveness of the proposed algorithm has been proved by simulation results.
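
A hedged geometric sketch of the relocation idea described above: a node moves toward a neighbor's position until it sits within half the communication range. Everything beyond the abstract (the range value, the single-anchor simplification) is an assumption.

```python
import math

COMM_RANGE = 100.0   # assumed communication range (m)

def relocate(node, anchor):
    """Move `node` toward `anchor` so their distance is at most COMM_RANGE/2."""
    dx, dy = anchor[0] - node[0], anchor[1] - node[1]
    d = math.hypot(dx, dy)
    target = COMM_RANGE / 2
    if d <= target:
        return node                      # already close enough
    scale = (d - target) / d             # fraction of the gap to cover
    return (node[0] + dx * scale, node[1] + dy * scale)

print(relocate((0.0, 0.0), (120.0, 0.0)))   # -> (70.0, 0.0), 50 m from anchor
```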