
Showing papers in "South African Computer Journal in 2017"


Journal ArticleDOI
TL;DR: A state-of-the-art survey of indoor positioning and navigation systems and technologies, and their use in various scenarios is presented, which analyses distinct positioning technology metrics such as accuracy, complexity, cost, privacy, scalability and usability.
Abstract: The research and use of positioning and navigation technologies outdoors has seen a steady and exponential growth. Based on this success, there have been attempts to implement these technologies indoors, leading to numerous studies. Most of the algorithms, techniques and technologies used have been implemented outdoors. However, how they fare indoors is different altogether. Thus, several technologies have been proposed and implemented to improve positioning and navigation indoors. Among them are Infrared (IR), Ultrasound, Audible Sound, Magnetic, Optical and Vision, Radio Frequency (RF), Visible Light, Pedestrian Dead Reckoning (PDR)/Inertial Navigation System (INS) and Hybrid. The RF technologies include Bluetooth, Ultra-wideband (UWB), Wireless Sensor Network (WSN), Wireless Local Area Network (WLAN), Radio-Frequency Identification (RFID) and Near Field Communication (NFC). In addition, positioning techniques applied in indoor positioning systems include the signal properties and positioning algorithms. The prevalent signal properties are Angle of Arrival (AOA), Time of Arrival (TOA), Time Difference of Arrival (TDOA) and Received Signal Strength Indication (RSSI), while the positioning algorithms are Triangulation, Trilateration, Proximity and Scene Analysis/ Fingerprinting. This paper presents a state-of-the-art survey of indoor positioning and navigation systems and technologies, and their use in various scenarios. It analyses distinct positioning technology metrics such as accuracy, complexity, cost, privacy, scalability and usability. This paper has profound implications for future studies of positioning and navigation.
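
To make the survey's terminology concrete, the sketch below illustrates how an RSSI reading is commonly converted to a distance with a log-distance path-loss model and how three such distances are combined by least-squares trilateration. This is a generic illustration of the techniques the survey names; the path-loss parameters, beacon layout and RSSI values are assumptions for the example, not values from the paper.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model (assumed parameters):
    rssi = tx_power - 10*n*log10(d)  =>  d = 10**((tx_power - rssi) / (10*n))."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares trilateration: linearise each circle equation against
    the last anchor and solve the resulting linear system A x = b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x_n, y_n = anchors[-1]
    A, b = [], []
    for (x_i, y_i), d_i in zip(anchors[:-1], d[:-1]):
        A.append([2 * (x_n - x_i), 2 * (y_n - y_i)])
        b.append(d_i**2 - d[-1]**2 - x_i**2 - y_i**2 + x_n**2 + y_n**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Three fixed beacons at known positions and their measured RSSI values (illustrative).
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-65.0, -72.0, -70.0]
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssi]))
```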

138 citations


Journal ArticleDOI
TL;DR: The study found that the uptake of technology remains low; on average, the frequency of usage per tool type was as follows: contextual tools (41%), sharing information and ideas tools (29%), experiential tools (26%) and reflective dialogue tools (18%).
Abstract: Information Communication Technology (ICT) integration in the classroom is often viewed as a panacea for resolving South Africa’s education challenges. However, ICT integration in education in South Africa has been severely limited by operational, strategic and pedagogic challenges. In part, addressing the strategic and operational challenges involves understanding the current landscape of ICT integration in schools. There is scant information on the practical enforcement of ICTs in the classroom. The aim of this research is to determine the extent of ICT usage in South African schools in order to obtain an understanding of the practical enforcement of ICTs at the school level. This study combines both qualitative and quantitative data collection methods in order to provide a rich, nuanced perspective of ICT integration in South African schools. The study found that the uptake of technology remains low; on average, the frequency of usage per tool type was as follows: contextual tools (41%), sharing information and ideas tools (29%), experiential tools (26%) and reflective dialogue tools (18%). It was found that teachers are uncertain with respect to the enforcement of e-education while being encumbered by poor infrastructure and a lack of skills.

47 citations


Journal ArticleDOI
TL;DR: The paper presents the outcomes of several discussions conducted with representatives from the municipal sector and reflects on the critical role that municipalities hold in pursuing e-Government and highlights the barriers identified by respondents that require consideration from local government.
Abstract: The objective of the paper was to understand the challenges to e-Government implementation in South Africa. The paper contributes to the ongoing discussion regarding the challenges facing e-Government implementations in developing nations. It presents the outcomes of several discussions conducted with representatives from the municipal sector. These included semi-structured interviews and a workshop with 40 attendees, resulting in qualitative primary data. Through the application of an inductive thematic data analysis, the paper reflects on the critical role that municipalities hold in pursuing e-Government. It further discusses the different stakeholders that may influence the manifestation of e-Government for municipalities. It also highlights the barriers identified by respondents that require consideration from local government. The barriers include governance-related issues, access to resources, leadership, ICT skills and funding.

43 citations


Journal ArticleDOI
TL;DR: Analysis of the factors that affect the use and non-use of a Learning Management System by lecturers in a South African university showed that both internal and external factors are important in shaping use of LMS.
Abstract: The purpose of this research was to identify the factors that affect the use and non-use of a Learning Management System (LMS) by lecturers in a South African university. This research involved a qualitative case-study of lecturers, and utilised questionnaires for data collection. Findings showed that both internal and external factors are important in shaping use of the LMS. Contrary to the literature, high levels of use were found amongst the respondents with a high perception of ease of use and usefulness. However, due to issues such as a lack of ongoing training, more advanced features of the technology were not being utilised. It also emerged that patterns of use were affected by pre-existing practices and that the perception of the system was affected by differences from the previous system. This study contributed to the literature by providing in-depth analysis of why certain factors affect lecturers’ decisions regarding LMS usage. Future research should consider the use of extended features of LMSs and the prior practices and systems used within the context of study to understand how they affect use or non-use of an LMS. This study contributes to practice through promoting understanding of why there is underuse of extended features of an LMS among lecturers.

19 citations


Journal ArticleDOI
TL;DR: The research proposes a short-term initiative in the form of a game-based approach, which will assist school learners in becoming more cyber safe and teach learners about the relevant cyber-related risks and threats.
Abstract: Virtually all school learners today have access to ICT devices and the internet at home or at school. More and more schools are using ICT devices to improve education in South Africa. ICT devices and internet access have enormous advantages and assist learners in learning and teachers in teaching more successfully. However, with these advantages come numerous ICT and cyber-risks and threats that can harm learners, for example cyber-bullying, identity theft and access to inappropriate material. Currently, South Africa does not have a long-term plan to grow a cyber-safety culture in its schools. This research therefore proposes a short-term initiative in the form of a game-based approach, which will assist school learners in becoming more cyber safe and teach learners about the relevant cyber-related risks and threats. The research is based on a quantitative survey that was conducted among primary school learners to establish if the game-based approach would be a feasible short-term initiative. The aim of the research is to establish if a game-based approach can be used to improve cyber-safety awareness. This approach was plotted into the ICT and cyber-safety policy required of all schools.

16 citations


Journal ArticleDOI
TL;DR: A model for the usability of an interactive voice response (IVR) system used to collect information from citizens to improve public safety in the city is provided, and it is recommended that city management take efficiency and perceived satisfaction into account when designing or developing a participatory crowdsourcing system.
Abstract: ‘Smart Cities’ is a new and inventive approach that allows city management to use current infrastructure and resources more effectively. Participatory crowdsourcing is an effective method to collect data from citizens, as it does not require costly new infrastructure and can be used by all citizens, regardless of their literacy level. To date, very few studies have investigated the usability of these participatory crowdsourcing systems in a developing country context. The focus of this paper is therefore to provide a model for the usability of an interactive voice response (IVR) system used to collect information from citizens to improve public safety in the city. The study makes use of a quantitative survey method. A questionnaire was completed by 361 participants of a public safety project hosted in East London, South Africa. The data analysis was completed making use of factor analysis. The results indicated that efficiency and perceived satisfaction with the system were important elements that determined the usability of the system. The recommendation of the study is therefore that city management must take these two elements into account when designing or developing a participatory crowdsourcing system.

15 citations


Journal ArticleDOI
TL;DR: The Habermasian goal of the panel, and the objective of this research, was to make sense of the paradox from the worldviews of the different sectors involved in ICT skills, and to identify mutually acceptable means of dealing with the paradox.
Abstract: There is often criticism from industry that there are not enough ICT-skilled professionals in the market, and that the situation may only be getting worse. On the other hand, some ICT graduates struggle to find jobs. This phenomenon is referred to as the ICT skills paradox. A recent panel at the 2015 Southern African Computer Lecturers’ Association (SACLA) conference, composed of leaders from industry, academia and government, discussed their perspectives on the ICT skills paradox. The Habermasian goal of the panel, and the objective of this research, was to make sense of the paradox from the worldviews of the different sectors involved in ICT skills, and to identify mutually acceptable means of dealing with the paradox. The discourse of the panel session was analysed using techniques from grounded theory. There were three overarching findings: South Africa needs a formal accreditation body which is sensitive to and reflective of the unique local contexts; there is a need for a central coordinating agency on ICT skills between academia, government and industry; and rather than attempt to define ICT or ICT skills, efforts should be placed on embracing transdisciplinary practices. Based on the findings, the paper makes recommendations on how to deal with the contrasts, the dynamism of the ICT sector, and how the current ICT skills paradox could be resolved in South Africa and similar developing country contexts. The paper also makes a contribution to ICT theory on how to achieve consensus and implement ICT strategies across seemingly contradictory sectors using Habermas’ theory on social interactions.

13 citations


Journal ArticleDOI
TL;DR: Results showed that there was a positive relationship between high-performing learners’ proficiency test scores and their scores in the midterm averaged test, and that the proficiency tests enhanced learners’ performance in the paper-based midterm averaged test.
Abstract: Personalised, adaptive online learning platforms that form part of web-based proficiency tests play a major role in the improvement of the quality of learning in physics and assist learners in building proficiency, preparing for tests and using their time more effectively. In this study, the effectiveness of an adaptive learning platform, Wiley Plus ORION, was evaluated using proficiency test scores compared to paper-based test scores in a first-year introductory engineering physics course. Learners’ performance activities on the adaptive learning platform as well as their performance on the proficiency tests and their impact on the paper-based midterm averaged test were investigated using both qualitative and quantitative methods of data collection. A comparison between learners’ performance on the proficiency tests and a paper-based midterm test was done to evaluate whether there was a correlation between their performance on the proficiency tests and the midterm test. Focus group interviews were carried out with three categories of learners to elicit their experiences. Results showed that there was a positive relationship between high-performing learners’ proficiency test scores and their scores in the midterm averaged test, and that the proficiency tests enhanced learners’ performance in the paper-based midterm averaged test.

11 citations


Journal ArticleDOI
TL;DR: Results revealed that students perceived SYSPRO Latte to be easy to use and useful, and verified other studies identifying a correlation between PEOU and PU, and confirmed the benefits of simulation-based learning and m-learning particularly for content presentation.
Abstract: The hands-on use of complex, industrial Enterprise Resource Planning (ERP) systems in educational contexts can be costly and complex. Tools that simulate the hands-on use of an ERP system have been proposed as alternatives. Research into the perceived usefulness (PU) and perceived ease of use (PEOU) of these simulation tools in an m-learning environment is limited. As part of this study, an m-learning simulation application (SYSPRO Latte) was designed based on experiential learning theory and on a previously proposed theoretical framework for m-learning. The application simulates the hands-on experience of an ERP system. The purpose of this paper is to analyse the results of a study of 49 students who used SYSPRO Latte and completed a questionnaire on its PEOU and PU. The results revealed that students perceived SYSPRO Latte to be easy to use and useful, and verified other studies identifying a correlation between PEOU and PU. The study also confirmed the benefits of simulation-based learning and m-learning particularly for content presentation. The importance of considering design principles for m-learning applications was highlighted. This study is part of a larger, comprehensive research project that aims at improving learning of ERP systems in higher education.

10 citations


Journal ArticleDOI
TL;DR: This research suggests that sustainability needs to become a focus for IT project managers, but for this to happen, they require the relevant project management sustainability knowledge.
Abstract: The concept of sustainability is becoming more and more important in the face of dwindling resources and increasing demand. Despite this, there are still many industries and disciplines in which sustainability is not actively addressed. The requirement of meeting current and future needs is not an issue from which IT projects are exempt. Ensuring sustainability requires managing sustainability in all activities. The field of IT and sustainability is one in which literature is appearing, but at a slow pace, and this leaves many unanswered questions regarding the state of sustainability in IT projects and the commitment of IT project managers to sustainability. In not knowing what the state of sustainability is, potential shortcomings remain unknown and corrective action cannot be taken. Quantitative research was conducted through the use of a survey in the form of a structured questionnaire. This research was cross-sectional as the focus was to assess the state of sustainability at a single point in time. IT project managers were randomly sampled to get an objective view of how committed they were to sustainability. This research made use of a project management sustainability maturity model to measure the extent to which sustainability is incorporated into IT projects. The findings are that IT project managers are not committed to sustainability. While the economic dimension yielded the best results, they were not ideal, and it is in fact the social and environmental dimensions that require the most attention. This lack of commitment to the social and environmental dimensions is not limited to select aspects within each dimension, as each dimension’s aspects are addressed to a similarly poor extent. This research suggests that sustainability needs to become a focus for IT project managers, but for this to happen, they require the relevant project management sustainability knowledge.

10 citations


Journal ArticleDOI
TL;DR: This paper presents a solution based on incremental extensions of the Support Vector Machine (SVM) learning paradigm that updates an existing SVM whenever new training data are acquired and introduces an on-line model selection approach in the incremental learning process.
Abstract: In this paper, we address the problem of learning an adaptive classifier for the classification of continuous streams of data. We present a solution based on incremental extensions of the Support Vector Machine (SVM) learning paradigm that updates an existing SVM whenever new training data are acquired. To ensure that the SVM effectiveness is guaranteed while exploiting the newly gathered data, we introduce an on-line model selection approach in the incremental learning process. We evaluated the proposed method on real world applications including on-line spam email filtering and human action classification from videos. Experimental results show the effectiveness and the potential of the proposed approach.
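
The paper's own incremental SVM with on-line model selection is not reproduced here; as a loosely analogous sketch of the streaming pattern it describes (update an existing classifier whenever new training data arrive), the snippet below trains a linear, hinge-loss classifier batch by batch with scikit-learn's partial_fit. The dataset, batch size and hyperparameters are hypothetical choices for illustration.

```python
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

# Hedged analogue only: a linear SVM (hinge loss) updated with SGD,
# not the incremental SVM with on-line model selection from the paper.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
clf = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)

batch, classes = 500, [0, 1]
for start in range(0, len(X), batch):
    Xb, yb = X[start:start + batch], y[start:start + batch]
    # partial_fit updates the existing model with the newly acquired batch.
    clf.partial_fit(Xb, yb, classes=classes)
    print(f"seen {start + len(Xb):5d} samples, "
          f"accuracy on batch = {clf.score(Xb, yb):.3f}")
```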

Journal ArticleDOI
TL;DR: An empirical literature review is utilized to build and present an integrated, descriptive model for the design, development, and use of DSSs that rely on procedural rationality as an alternative strategy for resolving wicked problems.
Abstract: Wicked problems are hyper-complex problems that are not solvable via traditional methods. Some common examples of these include issues such as poverty, climate change, business strategy, and general policy development, which all have high stakes and no straightforward solution. The ambiguity of these problems can be particularly frustrating for the individuals and organizations that encounter them, as the very essence of these problems is elastic and unstable. Additionally, attempts to tame wicked problems tend to be irrevocable — for better or for worse — as the problem itself shifts in unpredictable ways in response. Decision support systems (DSSs) have long been considered a panacea for a number of highly complex problems in light of their potential to store, retrieve, and manipulate information to aid decision making. However, classical DSSs, being originally intended for semi-structured types of problems, are rendered practically impotent in the presence of wicked problems and their associated complexities. Thus, this article investigates the possibility of DSSs that rely on procedural rationality as an alternative strategy for resolving wicked problems. An empirical literature review is utilized to build and present an integrated, descriptive model for the design, development, and use of such DSSs for resolving wicked problems.

Journal ArticleDOI
TL;DR: A proposed model representing VLS quality in use characteristics, measured by the constructs of perceived usefulness and perceived importance is presented, which has potentially useful implications for the use of a VLS by educators and future VLS design iterations.
Abstract: In higher education institutions, digital learning environments, referred to as virtual learning systems in this article, have been adopted and are becoming increasingly popular among academics. A virtual learning system (VLS) has a suite of tools with associated functions and non-functional system characteristics. Higher education institutions (HEIs) implement a VLS with the intent to assimilate e-Learning with face-to-face instruction and thereby derive associated benefits from its usage. Currently there is limited information on educators’ perceptions on the usefulness of VLS tool functionality, and the importance assigned to non-functional characteristics. This article adapted the generic framework of the ISO 9126 external software quality model to ascertain the perceptions of educators with regards to VLS functionality and non-functional quality characteristics. A case study research strategy was adopted, using two South African higher education institutions. The main contribution of this article is a proposed model representing VLS quality in use characteristics, measured by the constructs of perceived usefulness and perceived importance. In addition to the theoretical contribution, this article makes a practical contribution by providing educators’ recommendations on the improvement of VLS quality characteristics. This article has potentially useful implications for the use of a VLS by educators and future VLS design iterations.

Journal ArticleDOI
TL;DR: A poorly structured field, with many researchers proposing new guidelines, but little incremental refinement of extant guidelines is discovered, which leaves designers without a clear way of discriminating between guidelines, and could contribute to the lack of deployment.
Abstract: Guidelines are recommended as a tool for informing user interface design. Despite a proliferation of guidelines in the research literature, there is little evidence of their use in industry, nor their influence in academic literature. In this paper, we explore the research literature related to mobile phone design guidelines to find out why this should be so. We commenced by carrying out a scoping literature review of the mobile phone design guideline literature to gain insight into the maturity of the field. The question we wanted to explore was: “Are researchers building on each others’ guidelines, or is the research field still in the foundational stage?” We discovered a poorly structured field, with many researchers proposing new guidelines, but little incremental refinement of extant guidelines. It also became clear that the current reporting of guidelines did not explicitly communicate their multi-dimensionality or deployment context. This leaves designers without a clear way of discriminating between guidelines, and could contribute to the lack of deployment we observed. We conducted a thematic analysis of papers identified by means of a systematic literature review to identify a set of dimensions of mobile phone interface design guidelines. The final dimensions provide a mechanism for differentiating guidelines and expediting choice.

Journal ArticleDOI
TL;DR: A case study on the scalability of several versions of the molecular dynamics code (DL_POLY), performed on South Africa’s Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers, found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and InfiniBand networks.
Abstract: This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa’s Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
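
As background to the scalability terms used above (standard HPC definitions, not specific to this study): for p processes with runtime T(p), speed-up and parallel efficiency are

\[
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}.
\]

Strong scaling fixes the total problem size as p grows, so the ideal is S(p) ≈ p; weak scaling fixes the problem size per process, so the ideal is a constant runtime, i.e. E(p) ≈ 1.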

Journal ArticleDOI
TL;DR: Findings show that the university’s e-learning platform is utilised for some of their courses; however, students seem to prefer free and open-source platforms.
Abstract: The use of social and collaborative computing has the potential of assisting learning and improving the ability to work together as part of a team. Team work is a graduate attribute that students need to acquire before transitioning from university into the workplace. The aim of this exploratory research was to provide insights into the use of social and collaborative applications by Computer Science students, and the emergent affordances student project teams have created with the use of these applications. It answers the questions: What e-learning platforms or applications do students use to collaborate for team projects? What technology affordance draws students to use these applications? This study adopts affordance theory as the theoretical framework. Two types of content analysis, conventional content analysis and summative content analysis, were used to analyse the data. Data was gathered using a pre-designed questionnaire with the teams during the first semester of 2016. Findings show that the university’s e-learning platform is utilised for some of their courses; however, students seem to prefer free and open-source platforms. Student project teams used applications such as WhatsApp, Telegram, Dropbox, Google Drive, Google Docs, as well as email messages, to work jointly, and were successfully able to complete their team projects. Four types of technology affordance were identified as relevant: communicative-affordance, document share-affordance, course resource-affordance, and integrity-affordance.

Journal ArticleDOI
TL;DR: A conceptual framework, derived from the work of Bernstein on the pedagogic discourse alongside Hooper and Rieber’s model of educational technology adoption, has been developed to shed light on secondary school teachers’ differential adoption of tablet technology.
Abstract: While recent technological innovations have resulted in calls to incorporate tablets into the classroom, schools have been criticised for not taking advantage of what the technology has to offer. Past research has shown that teachers do not automatically choose to adopt technology in the classroom. A number of concerns exist in relation to the research being conducted within this area. Firstly, the majority of research studies have not been based on sound conceptual frameworks. Secondly, for the most part, these research studies have tended to focus on the technology itself rather than the resulting changes in teaching and learning. Finally, much of the literature is premised on constructivist pedagogic practices which offer promises of radical pedagogic change. An understanding of technology teachers’ orientations to the new technology, coupled with an understanding of the reasons behind teachers’ choices to adopt or not adopt technology, has not yet been fully explored. From a review of the literature in relation to teachers’ Professional Dispositions, derived from the work of Bernstein on the pedagogic discourse, alongside Hooper and Rieber’s model of educational technology adoption, a conceptual framework has been developed that will shed light on secondary school teachers’ differential adoption of tablet technology.

Journal ArticleDOI
TL;DR: A business process owner competency framework was developed and discussed, showing that business process owners require competencies in core business process management, strategic alignment, determining organizational goals, governance, documentation, training, and systemic thinking.
Abstract: Process owners are vital to the establishment and functioning of process oriented organizations. However, there is a paucity of understanding regarding the tasks process owners should undertake and what competencies they require. In this study, sets of process owner competencies and process owner tasks emerged from interviews with executives from three financial services organizations in South Africa. The findings were compared to the BPTrends report “State of the Business Process Management Market 2016”. Common themes were identified and validated against recent literature. Based on the validated themes a business process owner competency framework was developed and discussed. The framework shows that business process owners require competencies in core business process management, strategic alignment, determining organizational goals, governance, documentation, training, and systemic thinking. The competencies and tasks identified provide a practical contribution to practitioners and recruiters in the field, while the framework adds a theoretical contribution to the field of business process management.

Journal ArticleDOI
TL;DR: An algorithm that computes the shortest routes, assigns optimal flows to these routes and simultaneously determines optimal link capacities is proposed; the goal is to achieve statistical multiplexing advantages when multiple traffic and QoS classes of connections share a common trunk.
Abstract: The general Multiprotocol Label Switching (MPLS) topology optimisation problem is complex and concerns the optimum selection of links, the assignment of capacities to these links and the routing requirements on these links. Ideally, all of these are jointly optimised, leading to a minimum-cost network which continually meets given objectives on network delay and throughput. In practice, these problems are often dealt with separately and a solution iterated. In this paper, we propose an algorithm that computes the shortest routes, assigns optimal flows to these routes and simultaneously determines optimal link capacities. We take into account the dynamic adaptation of optimal link capacities by considering the same Quality of Service (QoS) measure used in the flow assignment problem in combination with a blocking model for describing call admission control (CAC) in multiservice broadband telecommunication networks. The main goal is to achieve statistical multiplexing advantages when multiple traffic and QoS classes of connections share a common trunk. We offer a mathematical programming model of the problem and proficient solutions founded on a Lagrangean relaxation of the problem. Experimental findings on 2-class and 6-class models are reported.
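
A heavily simplified sketch of the first stage described above (routing each demand over a shortest path and accumulating link flows) is given below. It deliberately omits the paper's Lagrangean relaxation, CAC blocking model and joint capacity optimisation; the topology, demands and 20% headroom rule are invented purely for illustration, and networkx is assumed to be available.

```python
import networkx as nx

# Toy network: edges weighted by link delay (illustrative values only).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 2.0), ("B", "D", 2.0),
    ("A", "C", 1.0), ("C", "D", 4.0), ("B", "C", 1.0),
])

# Traffic demands between node pairs: (source, destination, offered load).
demands = [("A", "D", 10.0), ("C", "D", 5.0)]

link_flow = {}
for src, dst, load in demands:
    # Stage 1 of the joint problem: route each demand on its shortest path.
    path = nx.shortest_path(G, src, dst, weight="weight")
    for u, v in zip(path, path[1:]):
        key = tuple(sorted((u, v)))
        link_flow[key] = link_flow.get(key, 0.0) + load

# Stage 2 (crudely simplified): provision each link with 20% headroom,
# instead of solving the capacity assignment via Lagrangean relaxation.
for link, flow in link_flow.items():
    print(f"link {link}: flow = {flow:.1f}, provisioned capacity = {1.2 * flow:.1f}")
```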

Journal ArticleDOI
TL;DR: Four algorithms that convert arbitrary DFAs to language-equivalent FDFAs are empirically investigated, three of which are concrete variants of a previously published abstract algorithm, the DFA-Homomorphic Algorithm.
Abstract: Failure deterministic finite automata (FDFAs) represent regular languages more compactly than deterministic finite automata (DFAs). Four algorithms that convert arbitrary DFAs to language-equivalent FDFAs are empirically investigated. Three are concrete variants of a previously published abstract algorithm, the DFA-Homomorphic Algorithm (DHA). The fourth builds a maximal spanning tree from the DFA to derive what it calls a delayed input DFA. A first suite of test data consists of DFAs that recognise randomised sets of finite length keywords. Since the classical Aho-Corasick algorithm builds an optimal FDFA from such a set (and only from such a set), it provides benchmark FDFAs against which the performance of the general algorithms can be compared. A second suite of test data consists of random DFAs generated by a specially designed algorithm that also builds language-equivalent FDFAs, some of which may have non-divergent cycles. These random FDFAs provide (not necessarily tight) lower bounds for assessing the effectiveness of the four general FDFA generating algorithms.
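
For readers unfamiliar with failure automata, the sketch below shows the basic FDFA lookup idea: when a state has no transition on the current symbol, failure arcs are followed without consuming input until a transition is found. It is a generic illustration in the spirit of Aho-Corasick, not one of the four conversion algorithms evaluated in the paper, and the example automaton is invented.

```python
class FDFA:
    """Minimal failure DFA: sparse transitions plus failure arcs."""

    def __init__(self, delta, fail, start, accepting):
        self.delta = delta          # delta[state] -> {symbol: next_state}
        self.fail = fail            # fail[state]  -> fallback state
        self.start = start
        self.accepting = accepting

    def step(self, state, symbol):
        # Follow failure transitions (without consuming input) until the
        # symbol is defined; assumes failure chains eventually resolve.
        while symbol not in self.delta.get(state, {}):
            if state not in self.fail:   # no fallback left: dead end
                return None
            state = self.fail[state]
        return self.delta[state][symbol]

    def accepts(self, word):
        state = self.start
        for symbol in word:
            state = self.step(state, symbol)
            if state is None:
                return False
        return state in self.accepting

# Tiny example: recogniser for strings ending in "ab" over {a, b}.
fdfa = FDFA(
    delta={0: {"a": 1, "b": 0}, 1: {"b": 2}, 2: {"a": 1, "b": 0}},
    fail={1: 0},
    start=0,
    accepting={2},
)
print(fdfa.accepts("aab"), fdfa.accepts("aa"))  # True False
```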

Journal ArticleDOI
TL;DR: This work addresses the lack of interprocess communication facilities in the Java programming language by utilising Microsoft Windows’ native IPC mechanisms through the Java Native Interface; native C code invoking the IPC mechanisms provided by Windows allowed successful synchronous communication between separate Java processes.
Abstract: The Java programming language provides a comprehensive set of multithreading programming techniques but currently lacks interprocess communication (IPC) facilities, other than slow socket-based communication mechanisms (which are intended primarily for distributed systems, not interprocess communication on a multicore or multiprocessor system). This is problematic due to the ubiquity of modern multicore processors, and the widespread use of Java as a programming language throughout the software development industry. This work aimed to address this problem by utilising Microsoft Windows’ native IPC mechanisms through a framework known as the Java Native Interface. This enabled the use of native C code that invoked the IPC mechanisms provided by Windows, which allowed successful synchronous communication between separate Java processes. The results obtained illustrate the performance dichotomy between socket-based communication and native IPC facilities, with Windows’ facilities providing significantly faster communication. Ultimately, these results show that there are far more effective communication structures available. In addition, this work presents generic considerations that may aid in the eventual design of a generic, platform-independent IPC system for the Java programming language. The fundamental considerations include shared memory with semaphore synchronisation, named pipes and a socket communication model.

Journal ArticleDOI
TL;DR: This work demonstrates a new approach for improved image-based biometric feature-fusion that extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy.
Abstract: The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT Face, MCYT Fingerprint and CASIA Palmprint.
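
The snippet below is a generic, hedged illustration of feature-level fusion as the abstract describes it (normalise each modality's feature vectors, concatenate them, and train a single SVM). It does not reproduce the paper's segmentation method, feature-selection approach or datasets; the random feature matrices merely stand in for real face, fingerprint and palmprint features.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in feature matrices for three modalities (one row per sample);
# in practice these would come from face, fingerprint and palmprint pipelines.
n_samples = 40
face = rng.normal(size=(n_samples, 64))
fingerprint = rng.normal(size=(n_samples, 32))
palmprint = rng.normal(size=(n_samples, 48))
labels = np.arange(n_samples) % 8          # 8 hypothetical identities

# Feature-level fusion: normalise each modality, then concatenate.
fused = np.hstack([MinMaxScaler().fit_transform(m)
                   for m in (face, fingerprint, palmprint)])

clf = SVC(kernel="linear").fit(fused, labels)
print("training accuracy on fused features:", clf.score(fused, labels))
```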

Journal ArticleDOI
TL;DR: The research objective was to provide a software emulator which provides debugging, testing and prototyping capabilities for a MAR application including the ability to emulate the combination of computer vision with locational and motion sensors using previously recorded data.
Abstract: Augmented Reality (AR) provides a fusion of the real and virtual worlds by superimposing virtual objects on real-world scenery. The implementation of AR on mobile devices is known as Mobile Augmented Reality (MAR). MAR is in its infancy and MAR development software is in the process of maturing. Dating back to the origin of Computer Science as an independent field, software development tools have been an integral part of the process of software creation. MAR, being a relatively new technology, is still lacking such related software development tools. With the rapid progression of mobile devices, the development of MAR applications fusing advanced Computer Vision techniques with mobile device sensors has become increasingly feasible. However, testing and debugging of MAR applications present a new challenge in that they require the developer to be at the location that is being augmented at some point during the development process. In this research study, a MAR recorder application was developed as well as emulation class libraries for Android devices that allow the recording and off-site playback of video, location and motion sensor data. The research objective was to provide a software emulator which provides debugging, testing and prototyping capabilities for a MAR application, including the ability to emulate the combination of computer vision with locational and motion sensors using previously recorded data. The emulator was evaluated using different mobile technologies. The results indicate that this research could assist developers of MAR applications to implement applications more rapidly, without being at the location.
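
The record-then-replay idea behind such an emulator can be sketched as follows: persist timestamped sensor samples captured on site, then feed them back to the application off-site with the original inter-sample timing. This is a minimal illustration of the concept only, not the Android recorder application or emulation class libraries developed in the study; the file format and sample values are invented.

```python
import json
import time

def record(samples, path):
    """Persist timestamped sensor samples (e.g. GPS fixes, accelerometer
    readings) captured on site."""
    with open(path, "w") as f:
        json.dump(samples, f)

def replay(path, handler):
    """Replay recorded samples off-site, preserving inter-sample timing,
    so the MAR code under test sees the same stream it would on location."""
    with open(path) as f:
        samples = json.load(f)
    for prev, cur in zip([None] + samples, samples):
        if prev is not None:
            time.sleep(cur["t"] - prev["t"])
        handler(cur)

# Hypothetical capture: timestamp (seconds), sensor name, values.
record([{"t": 0.0, "sensor": "gps", "value": [-33.92, 18.42]},
        {"t": 0.5, "sensor": "accel", "value": [0.1, 9.8, 0.2]}],
       "session.json")
replay("session.json", lambda s: print(s["sensor"], s["value"]))
```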

Journal ArticleDOI
TL;DR: The design and testing of both the C++ version and the GPU-accelerated version of FLORA are presented; further work involves testing the system with a wider variety of leaves and trying different machine learning algorithms for the leaf prediction routines.
Abstract: The Cape Floristic Kingdom (CFK) is the most diverse floristic kingdom in the world and has been declared an international heritage site. However, it is under threat from wild fires and invasive species. Much of the work of managing this natural resource, such as removing alien vegetation or fighting wild fires, is done by volunteers and casual workers. Many fynbos species, for which the Table Mountain National Park is known, are difficult to identify, particularly by non-expert volunteers. Accurate and fast identification of plant species would be beneficial in these contexts. The Fynbos Leaf Optical Recognition Application (FLORA) was thus developed to assist in the recognition of plants of the CFK. The first version of FLORA was developed as a rapid prototype in MATLAB; it utilized sequential algorithms to identify plant leaves, and much of this code consisted of interpreted M-files. The initial implementation suffered from slow performance, though, and could not run as a lightweight standalone executable, making it cumbersome. FLORA was thus re-developed as a standalone C++ version that was subsequently enhanced further by accelerating critical routines on a graphics processing unit (GPU). This paper presents the design and testing of both the C++ version and the GPU-accelerated version of FLORA. Comparative testing was done on all three versions of FLORA, viz., the original MATLAB prototype, the C++ non-accelerated version, and the C++ GPU-accelerated version, to show the performance and accuracy of the different versions. The accuracy of the predictions remained consistent across versions. The C++ version was noticeably faster than the original prototype, achieving an average speed-up of 8.7 for high-resolution 3456x2304 pixel images. The GPU-accelerated version was even faster, saving 51.85 ms on average for high-resolution images. Such a time saving would be perceptible for batch processing, such as rebuilding feature descriptors for all the leaves in the leaf database. Further work on this project involves testing the system with a wider variety of leaves and trying different machine learning algorithms for the leaf prediction routines.

Journal ArticleDOI
TL;DR: The xWCETT was evaluated with respect to quality of service related metrics and the results show that it outperformed the AODV and WCETT routing protocols.
Abstract: The increasing demand for broadband wireless technologies has led to the scarcity, inefficient utilization, and underutilization of the spectrum. Cognitive Radio (CR) technology has emerged as a promising solution that improves the utilization of the spectrum. However, routing is a challenge due to the dynamic nature of CR networks. The link quality varies in space and time as nodes join and leave the network. The network connectivity is intermittent due to node mobility and the activities of the primary user. Spectrum-aware, spectrum-agile, and interference-aware routing protocols are vital for the sturdiness of the network and efficient utilization of the resources. We propose an interference-aware, spectrum-aware, and agile extended Weighted Cumulative Expected Transmission Time (xWCETT) routing protocol. The protocol integrates the features of the Ad-hoc On-demand Distance Vector (AODV) and the weighted cumulative expected transmission time (WCETT) routing protocols. The xWCETT was simulated using the Network Simulator 2 and its performance compared with the AODV and the WCETT routing protocols. The xWCETT was evaluated with respect to quality of service related metrics and the results show that it outperformed the AODV and WCETT routing protocols.
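
For context, the WCETT metric that xWCETT extends is conventionally defined in the multi-radio mesh routing literature (quoted here as general background rather than from this paper) as

\[
\mathrm{WCETT}(p) = (1-\beta)\sum_{i \in p} \mathrm{ETT}_i + \beta \max_{1 \le j \le k} X_j,
\qquad
X_j = \sum_{\text{hops } i \in p \text{ on channel } j} \mathrm{ETT}_i,
\]

where ETT_i is the expected transmission time of hop i, X_j accumulates the ETT of the hops on path p that share channel j, and the tunable weight β ∈ [0,1] trades total path delay against intra-path channel diversity.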