
Showing papers in "International Journal of Advanced Computer Science in 2013"


Journal Article
TL;DR: This article discusses genetic algorithms and their application to three specific examples; the basic principles upon which the genetic algorithms are based are discussed and an example of the use of a genetic algorithm for finding the roots of a Diophantine equation is presented.
Abstract: This article discusses genetic algorithms and their application to three specific examples. The basic principles upon which the genetic algorithms are based are discussed. An example of the use of a genetic algorithm for finding the roots of a Diophantine equation is presented. A genetic program is next used to approximate additional values in a tabulated function. The third case we consider is the development of stock exchange trading systems.
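
The abstract gives no implementation details, so the following is only a minimal Python sketch of the first example: a genetic algorithm searching for non-negative integer roots of an illustrative Diophantine equation (a + 2b + 3c + 4d = 30, a textbook example, not necessarily the one used in the article).

```python
# Minimal GA sketch: search for non-negative integer roots of a Diophantine
# equation, here the illustrative example a + 2b + 3c + 4d = 30.
import random

def error(ind):
    a, b, c, d = ind
    return abs(a + 2*b + 3*c + 4*d - 30)   # 0 means ind is an exact root

def mutate(ind, low=0, high=30):
    i = random.randrange(len(ind))
    ind = list(ind)
    ind[i] = random.randint(low, high)
    return tuple(ind)

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def solve(pop_size=100, generations=500):
    pop = [tuple(random.randint(0, 30) for _ in range(4)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        if error(pop[0]) == 0:
            return pop[0]                    # exact root found
        parents = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = crossover(p1, p2)
            if random.random() < 0.2:        # mutation probability
                child = mutate(child)
            children.append(child)
        pop = children
    return min(pop, key=error)               # best approximation found

print(solve())
```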

38 citations


Journal Article
TL;DR: In this article, the authors present an Android-based mobile application, called Everywhere Run, that aims at motivating and supporting people during their running activities, acting as a virtual personal trainer, assisting users during their run and helping them to stick to the right pace.
Abstract: Many medical studies conducted on people from developed countries have shown a strong correlation between certain health problems and a sedentary lifestyle. Obesity and linked pathologies such as diabetes and cardiovascular disease are alarmingly becoming ever more common in rich societies. The most effective solution to these problems, as reported by the aforementioned studies, is a healthy diet together with constant and monitored physical activity. As a consequence, many research efforts have been devoted to finding strategies for motivating people to exercise regularly. In this paper, by taking advantage of the growing spread of mobile devices on a worldwide scale, we present an Android-based mobile application, called Everywhere Run, that aims at motivating and supporting people during their running activities. It behaves as a virtual personal trainer, assisting users during their run and helping them to stick to the right pace. In this way, users can fully focus on the run. Most importantly, Everywhere Run fosters the interaction between users and real personal trainers, in order to make it easy for non-expert people to start working out in a healthy and safe way.

24 citations


Journal Article
TL;DR: The Baobab is a storybook app for the iPad that was designed and developed based on research in visual learning, visual phonology, bilingualism, and Deaf children's cognitive development by the National Science Foundation-funded Science of Learning Center on Visual Language and Visual Learning (VL2) at Gallaudet University.
Abstract: The Baobab is the first bilingual storybook app for touchscreen tablets that was designed and developed based on research in visual learning, visual phonology, bilingualism, and Deaf children’s cognitive development by the National Science Foundation-funded Science of Learning Center on Visual Language and Visual Learning (VL2) at Gallaudet University. Developed by an all-deaf team, this VL2 storybook app is designed for early and emerging readers, bridging design principles in ASL storytelling and English text to research foundations, in order to facilitate reading and language acquisition for children who rely on the visual modality for learning.

16 citations


Journal Article
TL;DR: The tested hypothesis was found consistent with its predicted outcome: the fear of a lack of Interaction with other Humans is a negative predictor of intention to use e-services in Saudi Arabia.
Abstract: This paper reports the results of a mixed-method approach to answer the question: to what extent do cultural values impact e-service use in Saudi Arabia, and if so, how? The paper first introduces the importance of culture and defines the aspects of Saudi culture relevant to our scope: the fear of a lack of Interaction with other Humans. It then describes the method used and presents the qualitative and quantitative findings related to the need for Interaction with other Humans. Much of the written literature about human interaction is aimed at Information Systems design or design improvement, which is different from what is being investigated in this study. One of the factors this study considers is the perceived lack of interaction with other humans, or the anxiety people may feel about missing physical interaction with other people when business interaction moves fully to the virtual world. The review of the literature indicates that the impact of such a factor on the use of Information and Communication Technologies (ICT) has not been studied. This research aims to cover this gap by investigating to what extent the fear of a lack of Interaction with other Humans, as one of Saudi Arabia's cultural values, impacts e-service use in Saudi Arabia. The tested hypothesis was found consistent with its predicted outcome: the fear of a lack of Interaction with other Humans is a negative predictor of intention to use e-services in Saudi Arabia. The evidence suggests that taking the impact of cultural values into consideration will contribute to better ICT implementation and use.

16 citations


Journal Article
TL;DR: A static vulnerability verification tool is presented that smartphone application developers and vendors can use in the implementation and/or test phases of the development process. Unlike most existing tests, which are performed dynamically on application binaries that have already been packaged as commercial products and distributed in application marketplaces, this tool analyzes source code.
Abstract: Nowadays, the smartphone market has been growing rapidly, and the smartphone has become essential as a business tool. One of the crucial advantages of a smartphone is installable third-party applications, whose number has continued to grow explosively. However, vulnerabilities in smartphone applications are seen as a serious problem, not only for smartphone users but also for smartphone application developers and/or vendors. Until now, most vulnerability tests on smartphone applications have targeted applications that have already been packaged as commercial products and distributed in application marketplaces, and these tests are performed dynamically on application binaries. In this paper, we aim to develop a static vulnerability verification tool that smartphone application developers and/or vendors can utilize in the implementation and/or test phase of the development process. The tool takes source code as input, determines where privacy information is read in the source code and where that information is written or sent, then analyzes the flow along which the privacy information is transferred and/or transformed and reports the possibilities of privacy information leakage to the application developers.
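
The tool itself is not described at code level in the abstract. As a hedged illustration of the source-to-sink idea (where privacy data is read, and where it is written or sent), here is a toy Python sketch; the API names are hypothetical placeholders, not the authors' actual rules.

```python
# Toy illustration (not the authors' tool): flag variables that are assigned
# from a privacy "source" API and later passed to a "sink" API. The API names
# below are hypothetical placeholders.
import re

SOURCES = ["getDeviceId", "getLastKnownLocation", "getContacts"]   # read privacy data
SINKS   = ["sendTextMessage", "httpPost", "writeToSdCard"]         # leave the device

def find_leaks(source_code):
    tainted, leaks = set(), []
    for lineno, line in enumerate(source_code.splitlines(), 1):
        # mark variables assigned from a source call as tainted
        m = re.match(r"\s*(\w+)\s*=\s*.*\b(" + "|".join(SOURCES) + r")\s*\(", line)
        if m:
            tainted.add(m.group(1))
        # report tainted variables reaching a sink call
        for sink in SINKS:
            call = re.search(sink + r"\s*\(([^)]*)\)", line)
            if call and tainted & set(re.findall(r"\w+", call.group(1))):
                leaks.append((lineno, sink))
    return leaks

code = """
imei = telephony.getDeviceId()
msg = "hello"
httpPost(server, imei)
"""
print(find_leaks(code))   # -> [(4, 'httpPost')]
```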

11 citations


Journal Article
TL;DR: The proposal is to use 3D computer simulations in such a versatile way that those simulations can act as learning objects designed directly by those who possess the experience to be transmitted.
Abstract: Using 3D computer simulations for training surgeons is not new, and neither is the use of e-learning for improving students' knowledge acquisition. What we propose is to use 3D computer simulations in such a versatile way that those simulations can act as learning objects designed directly by those who possess the experience we want to be transmitted. In order to achieve this goal, it is necessary to create a model in charge of communication between learning objects and simulators. This model ensures that, on the one hand, the simulation offers an interface to the learning process that is stable enough not to be affected by every small change. On the other hand, the model also ensures that the learning simulation offers a mechanism for adopting changes in the learning process. The key to resolving this contradiction is to take the complex behavior of the simulation objects out of their control, leaving only very basic behaviors in them. This paper presents the problem and the design proposed to solve it in more detail.

11 citations


Journal Article
TL;DR: This paper describes a method of annotation, based on concepts from Information Science, to build a domain ontology using Natural Language Processing (NLP) technology, and uses Java Annotation Patterns Engine (JAPE) grammars to support regular expression matching and thus annotate IS concepts with the GATE developer tool.
Abstract: Recently, unstructured data on the World Wide Web has generated significant interest in the extraction of text, emails, web pages, reports and research papers in their raw form. Far more interestingly, extracting information from a specific domain using distributed corpora from the World Wide Web is a vital step towards creating corpus annotation. This paper describes a method of annotation, based on concepts from Information Science, to build a domain ontology using Natural Language Processing (NLP) technology. We used Java Annotation Patterns Engine (JAPE) grammars to support regular expression matching and thus annotate IS concepts using the GATE developer tool. This speeds up the time-consuming development of the ontology, which is important for domain experts facing time constraints and high workloads. The rules provide significant results: the pattern matching of IS concepts based on the lookup list produced 403 correct concepts, and accuracy was generally high, with no partially correct, missing or false-positive results. Using NLP techniques is a good approach to reducing the domain expert's workload, and the results can be evaluated.
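
For readers unfamiliar with gazetteer-style matching, the sketch below illustrates the general idea in Python (not in JAPE, and not the authors' actual grammar): concepts from a lookup list are matched in text and returned as annotation spans.

```python
# Hedged illustration (Python, not JAPE): gazetteer-style annotation of
# Information Science concepts from a lookup list, analogous to a JAPE rule
# over Lookup annotations in GATE. The concept list is a hypothetical example.
import re

LOOKUP_LIST = ["ontology", "metadata", "information retrieval", "taxonomy"]

def annotate(text):
    # longer concepts first so "information retrieval" wins over substrings
    alternatives = "|".join(re.escape(c) for c in sorted(LOOKUP_LIST, key=len, reverse=True))
    pattern = re.compile(r"\b(" + alternatives + r")\b", re.IGNORECASE)
    # return (start, end, matched concept) spans, like GATE annotations
    return [(m.start(), m.end(), m.group(1)) for m in pattern.finditer(text)]

doc = "We build a domain ontology and enrich its metadata for information retrieval."
for start, end, concept in annotate(doc):
    print(f"ISConcept[{start}:{end}] = {concept}")
```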

9 citations


Journal Article
TL;DR: This work presents a methodology for large scale quantitative narrative analysis of text data, which includes various recent ideas from text mining and pattern analysis in order to solve a problem arising in digital humanities and social sciences.
Abstract: We present a methodology for large scale quantitative narrative analysis (QNA) of text data, which includes various recent ideas from text mining and pattern analysis in order to solve a problem arising in digital humanities and social sciences. The key idea is to automatically transform the corpus into a network, by extracting the key actors and objects of the narration, linking them to form a network, and then analyzing this network to extract information about those actors. These actors can be characterized by: studying their position in the overall network of actors and actions; generating scatter plots describing the subject/object bias of each actor; and investigating the types of actions each actor is most associated with. The software pipeline is demonstrated on text obtained from three story books from the Gutenberg Project. Our analysis reveals that our approach correctly identifies the most central actors in a given narrative. We also find that the hero of a narrative always has the highest degree in a network. They are most often the subjects of actions, but not the ones with the highest subject bias score. Our methodology is very scalable, and addresses specific research needs that are currently very labour intensive in social sciences and digital humanities.
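
As a minimal, hedged sketch of the network step (not the authors' pipeline, whose extraction stage is omitted), the following Python fragment builds an actor graph from hand-made subject-verb-object triples and computes degree and a subject/object bias score.

```python
# Minimal sketch of the narrative-network idea: build an actor graph from
# subject-verb-object triples and compute degree and subject/object bias.
# The triples here are hand-made placeholders, not the paper's extraction step.
from collections import Counter
import networkx as nx

triples = [("Alice", "helps", "Bob"), ("Bob", "fights", "Dragon"),
           ("Alice", "fights", "Dragon"), ("Dragon", "captures", "Bob")]

G = nx.Graph()
as_subject, as_object = Counter(), Counter()
for subj, verb, obj in triples:
    G.add_edge(subj, obj, action=verb)
    as_subject[subj] += 1
    as_object[obj] += 1

for actor in G.nodes:
    s, o = as_subject[actor], as_object[actor]
    bias = s / (s + o)                       # 1.0 = always the subject of actions
    print(actor, "degree:", G.degree[actor], "subject bias:", round(bias, 2))
```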

9 citations


Journal Article
TL;DR: This paper expands previous analysis on SMS spam traffic from a tier-1 cellular operator, aiming to highlight the main characteristics of such messaging fraud activity and finds the main geographical sources of messaging abuse in the US.
Abstract: The Short Messaging Service (SMS) is one of the most successful cellular services, generating millions of dollars in revenue for mobile operators yearly. Current estimates indicate that billions of SMS messages are sent every day. Nevertheless, text messaging is becoming a source of customer dissatisfaction due to the rapid surge of messaging abuse activities. Although spam is a well-tackled problem in the email world, SMS spam experiences a yearly growth larger than 500%. In this paper we expand our previous analysis of SMS spam traffic from a tier-1 cellular operator presented in [1], aiming to highlight the main characteristics of such messaging fraud activity. Communication patterns of spammers are compared to those of legitimate cell-phone users and Machine-to-Machine (M2M) connected appliances. The results indicate that M2M systems exhibit communication profiles similar to spammers, which could mislead spam filters. We identify the main geographical sources of messaging abuse in the US. We also find evidence of spammer mobility, and of voice and data traffic resembling the behavior of legitimate customers. Finally, we include new findings on the invariance of the main characteristics of spam messages and spammers over time, and present results that indicate a clear device-reuse strategy in SMS spam activities.

7 citations


Journal Article
TL;DR: The authors describe an approach that is followed from the presentation of a problem to the development of a properly structured program; its application allowed an increase in approval rates and in the quality of the solutions presented, and it has proved adaptable to the needs of teaching programming with different imperative programming languages.
Abstract: Teaching programming is a complex task, mainly because of students' difficulties in building structured solutions and in interpreting problems. In introductory courses, we need to develop students' programming skills to help them apply their knowledge effectively to solving problems. This led us to use an approach that is followed from the presentation of the problem to the development of a properly structured program. Its application in the CS1/2 courses we have taught in recent years allowed an increase in approval rates and in the quality of the solutions presented; it has also proved to be well adapted to the needs of teaching programming with different imperative programming languages. In this work we illustrate the approach with a simple example. We also present an evaluation of the methodology carried out with a population of 130 students in introductory courses using the C and Java languages.

7 citations


Journal Article
TL;DR: This paper proposes an architecture to address interoperability and portability issues in Cloud Computing through the collaboration of an emerging technology, namely agents, and the XMPP protocol.
Abstract: Cloud Computing has become the most in-demand utility service of the current era because of its high computing power, performance, low cost, accessibility, scalability, and availability. However, it is still in its infancy and has some pitfalls due to the absence of standards. Interoperability and portability are two of the major issues in Cloud Computing. The authors point out these issues, examine how interoperability and portability problems actually arise, and propose an architecture to address them through the collaboration of an emerging technology, namely agents, and the XMPP protocol. Although an agent-based architecture has been proposed before, this paper is the first to use both features of an agent, intelligence and mobility, in a particular way. Mobility allows movement among different clouds, since agents are interoperable by default as specified by FIPA (the Foundation for Intelligent Physical Agents), while intelligence allows wise decisions to be taken by keeping a number of attributes in a database, i.e. the workload per service on each machine, the distance between clouds, and the services available on each cloud, in order to fix the above-cited problems.

Journal Article
TL;DR: A method of bidirectional interaction between a human and a humanoid robot in terms of emotional expressions, in which the robot detects continuous transitions of human emotions ranging from very sad to very happy using Active Appearance Models and a Neural Evolution Algorithm.
Abstract: We present a method of bidirectional interaction between a human and a humanoid robot in terms of emotional expressions. The robot is able to detect continuous transitions of human emotions that range from very sad to very happy, using Active Appearance Models (AAMs) and a Neural Evolution Algorithm to determine the face shape and gestures. In response to the human emotions, the robot performs postural reactions that dynamically adapt to the human expressions, producing body language that changes in intensity as the human emotions vary. Our method is implemented on the HOAP-3 humanoid robot.

Journal Article
TL;DR: The results from the experiment showed that videowiki-based coursework affects both external and internal motivation equally in most cases, which reflects that, from the perspective of constructivism, the videowiki-based assignment is equally effective compared to learning without this setting.
Abstract: The contemporary era of social media and web 2.0 has enabled a bottom-up on-line collaborative approach with easy content creation and subsequent knowledge sharing. The technically literate students of today, and the changes in pedagogy towards a user-centred approach in which learners engage in the learning process by constructing new ideas and concepts based on their current or past knowledge, facilitate the use of social media in learning environments. This paper describes the combination of a wiki and screen-capture videos as a complementary addition to conventional lectures in an information management and information systems development course. The basis for our approach was collaborative problem-based learning with concrete problems defined by students. In order to activate students, they were asked to identify unclear concepts or issues from four lecture themes that were not well defined or clarified. The students worked in small groups. After the groups selected the theme that was most unclear to them, they created presentations on the associated issues. Our intention was to facilitate collaborative learning by using the principles of the Jigsaw method. The results from the experiment showed that videowiki-based coursework affects both external and internal motivation equally in most cases. This reflects that, from the perspective of constructivism, the videowiki-based assignment is equally effective compared to learning without this setting. However, the development of knowledge concerning different course themes was positive in the groups of students who completed this videowiki assignment.

Journal Article
TL;DR: This work provides a generic model-based approach to evaluate the satisfaction of NFRs taking into account their mutual impacts and dependencies; it makes it possible to compare different system design models and to identify parts of the system that are good candidates for modification in order to achieve better satisfaction levels.
Abstract: One common goal followed by software engineers is to deliver a product which satisfies the requirements of different stakeholders. Software requirements are generally categorized into functional and Non-Functional Requirements (NFRs). While NFRs may not be the main focus in developing some applications, there are systems and domains where the satisfaction of NFRs is critical and one of the main factors determining the success or failure of the delivered product, notably in embedded systems. While the satisfaction of functional requirements can be decomposed and determined locally, NFRs are interconnected and have impacts on each other. For this reason, they cannot be considered in isolation, and a careful balance and trade-off among them needs to be established. We provide a generic model-based approach to evaluate the satisfaction of NFRs taking into account their mutual impacts and dependencies. By providing indicators regarding the satisfaction level of NFRs in the system, the approach makes it possible to compare different system design models and also to identify parts of the system which are good candidates for modification in order to achieve better satisfaction levels.
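
The abstract does not detail the evaluation model, so the following is only a speculative Python sketch of one way mutual impacts could be aggregated: locally assessed satisfaction scores adjusted by an impact matrix. All NFR names, scores and weights are invented for illustration and are not the paper's model.

```python
# Hedged sketch (not the paper's actual model): combine locally assessed NFR
# satisfaction scores with a mutual-impact matrix to get adjusted indicators.
# NFR names, scores and impact weights below are illustrative assumptions.

nfrs = ["performance", "security", "usability"]
local = {"performance": 0.8, "security": 0.9, "usability": 0.7}   # in [0, 1]

# impact[a][b]: how much satisfying a helps (+) or hurts (-) b
impact = {
    "performance": {"security": -0.2, "usability": 0.1},
    "security":    {"performance": -0.3, "usability": -0.1},
    "usability":   {},
}

def adjusted_satisfaction():
    adjusted = {}
    for target in nfrs:
        score = local[target]
        for source in nfrs:
            score += impact.get(source, {}).get(target, 0.0) * local[source]
        adjusted[target] = max(0.0, min(1.0, score))   # clamp to [0, 1]
    return adjusted

print(adjusted_satisfaction())
```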

Journal Article
TL;DR: Results show that a many-core processor can be used effectively in a cluster system while minimizing system overhead with this hybrid OS approach.
Abstract: This paper describes the design of an operating system to manage a hybrid computer system architecture with multi-core and many-core processors for exa-scale computing. In this study, a host operating system (Host OS) on a multi-core processor performs some functions on behalf of a lightweight operating system (LWOS) on a many-core processor, so that the many-core processor can be dedicated to executing the parallel program. Specifically, process execution and I/O processing on the many-core processor are supported by the Host OS with an efficient access method. To demonstrate this design, we implemented a prototype system using Intel Xeon dual-core CPUs, with Linux and the original LWOS loaded onto the respective processors. The basic performance of process control and file I/O access for the LWOS was evaluated. An LWOS process can be started with at least 110 msec of overhead for the many-core program, and the bandwidth was about the same as for file I/O on Linux with an I/O access size of about 16 MB. These results show that a many-core processor can be used effectively in a cluster system while minimizing system overhead with this hybrid OS approach.

Journal Article
TL;DR: In this paper, a traffic classifier based on the theory of multifractal network traffic is presented, which uses precisely the concept of multiplicative binomial cascades to get a feature vector to be used in the classification scheme.
Abstract: In this work, we present a traffic classifier based on the theory of multifractal network traffic. We use the concept of multiplicative binomial cascades to obtain a feature vector to be used in the classification scheme. This vector is composed of the multiplier variances of the multiplicative-cascade view of the traffic. We analyzed the performance of the proposed technique using a popular machine-learning software package, and the results showed viable traffic classification rates of over 90%.
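
A minimal sketch of the multiplier-variance feature, assuming the traffic is represented as byte counts per time bin (the trace below is synthetic): at each dyadic aggregation scale, the variance of the cascade multipliers is recorded, yielding the feature vector.

```python
# Minimal sketch of the multiplier-variance feature: treat traffic as byte
# counts per time bin, aggregate dyadically, and at each scale record the
# variance of the cascade multipliers m = child / parent. The trace is synthetic.
import numpy as np

def multiplier_variances(counts):
    x = np.asarray(counts, dtype=float)
    n = 2 ** int(np.floor(np.log2(len(x))))
    x = x[:n]                                   # truncate to a power of two
    variances = []
    while len(x) > 1:
        parents = x[0::2] + x[1::2]             # aggregate pairs of bins
        mask = parents > 0
        m = x[0::2][mask] / parents[mask]       # left-child multipliers
        variances.append(m.var())
        x = parents
    return variances                            # one variance per dyadic scale

rng = np.random.default_rng(0)
trace = rng.lognormal(mean=8, sigma=1, size=1024)   # synthetic byte counts per bin
print(multiplier_variances(trace))
```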

Journal Article
TL;DR: A new framework, the Password Authentication System for Cloud Environment (PASCE), is introduced, which is immune to the common attacks suffered by other verification schemes.
Abstract: Verification is a major part of protection against compromises of secrecy and authenticity. Though long-established login/password-based schemes are easy to implement in the cloud environment, they have been subjected to numerous attacks. As a substitute, token- and biometric-based verification systems were introduced for security. However, they have not improved security significantly enough to justify the expenditure. To provide more security, in this paper we introduce a new framework, the Password Authentication System for Cloud Environment (PASCE), which is immune to the common attacks suffered by other verification schemes.

Journal Article
TL;DR: The aim is to analyze and compare cyber crime conviction rates in the two countries and to suggest various remedies.
Abstract: Today's global era needs laws governing fast-paced cyber crime. The popularity of on-line transactions is on the rise, and with it come attempts by unscrupulous entities to defraud internet users. The modus operandi may take the form of hacking, spoofing, pornography, scanners, devices, fake cards and the like. The education, defense, law enforcement and banking sectors are exposed to risk, as the information sought usually includes data such as usernames, passwords, bank account and credit card numbers, the revelation of which is a huge loss not only for each individual but also for the state at large. The paper analyzes US cyber crime laws in comparison with Indian laws. The aim is to analyze and compare cyber crime conviction rates in the two countries and to suggest various remedies.

Journal Article
TL;DR: An experiment that has investigated the impact of hyper- and microgravity while performing selection tasks in a head-mounted Augmented Reality environment and the correlation between the human body frame of reference and haptic feedback by precise pointing movements towards a target is presented.
Abstract: Future user interface technologies will shift away from conventional displays, mice and keyboards and will make joint use of our physical world. Augmented Reality bridges this gap by enhancing the real world with virtual information, which can lead to an improved perception of our daily work life or of complex tasks. Under altered gravitational conditions, working in a space station means an increased workload for astronauts performing on-board activities. The use of Augmented Reality will support the space crew in handling intra-vehicular displays and control items in a natural manner. We present an experiment that investigated the impact of hyper- and microgravity while performing selection tasks in a head-mounted Augmented Reality environment. We were interested in the correlation between the human body frame of reference and haptic feedback during precise pointing movements towards a target. To evaluate sensorimotor coordination and workload we performed a comparative user study under parabolic flight conditions. In a within-subject design we evaluated different placement configurations of a virtual keyboard. The objective measures showed a significant requirement for haptic feedback.

Journal Article
TL;DR: The objective is to allow researchers and users to create their HPC programs more efficiently, with greater confidence in their functionality, and to reduce the time, effort and cost of the development and maintenance processes.
Abstract: Scientific researchers face critical challenges which require an increased role for High-Performance Computing (HPC). In many cases, these users, who are specialists in their own fields, have no previous training or the required skills to face these challenges, or simply want to compile and run their code as soon as possible. Sometimes this risks being counterproductive in terms of efficiency, because researchers may end up waiting longer for the final result due to a wrong programming model, a wrong software architecture, or even errors in the parallelization of sequential code. However, there is a clear lack of approaches with specific methodologies or optimal working environments for the development of HPC-specific software systems. Moreover, although there are several frameworks based on Aspect-Oriented and Component-Based Programming for supercomputing, they focus on the design and implementation phases, and none is based on the reuse of components from the earliest stages of development, which are defined in Requirements Engineering. The aim of this proposal is to provide new solutions for the open challenges in high-performance computing, through a methodology and a new framework based on aspect-oriented components for the development of scientific applications for HPC environments. The objective is to allow researchers and users to create their HPC programs more efficiently, with greater confidence in their functionality, and to reduce the time, effort and cost of the development and maintenance processes, through the reuse of components (with already developed and tested parallel source code) from the earliest stages of development.

Journal Article
TL;DR: This paper presents a proposed OCR solution for Sindhi character recognition using Artificial Neural Networks (ANNs) and highlights major alphabet differences between the Sindhi and Arabic languages from an OCR perspective.
Abstract: This paper presents a proposed OCR solution for Sindhi character recognition using Artificial Neural Networks (ANNs) and highlights major alphabet differences between the Sindhi and Arabic languages from an OCR perspective. A huge body of Sindhi literature is available in hard-copy format and needs to be converted into soft copy so that everyone can access and search it. Sindhi is a very rich language: it contains fifty-two characters and is able to absorb words from other languages into its vocabulary. Compared with other Unicode-based languages, Sindhi characters differ in terms of shape, cursive style, and position within a word. These properties convey linguistic information but increase the difficulty of writing and printing and create further complexity for document digitization (OCR). The recognition system takes Sindhi characters as input from a drawing control, or uses the specific "MB Lateefi" font for character input. The given input is then cropped and converted to a defined size with horizontal and vertical attributes. The proposed system's training process uses unsupervised learning, which iteratively adjusts the weight matrix to bring it closer to the input. Training stops when the weights become nearly equal to the values presented at the input layer; in this way, for every input character, one neuron becomes the winner neuron.
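
The abstract describes winner-neuron, unsupervised training; the sketch below illustrates that competitive (Kohonen-style) update rule in Python with random placeholder inputs rather than real Sindhi glyph features.

```python
# Hedged sketch of the winner-neuron training described in the abstract:
# competitive (Kohonen-style) learning where, for each input character vector,
# the closest weight vector wins and is pulled toward the input. Inputs are
# random placeholders, not real Sindhi glyph features.
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.random((52, 64))          # e.g. 52 characters, 8x8 feature vectors
weights = rng.random((52, 64))         # one neuron per character class

lr = 0.5
for epoch in range(200):
    for x in inputs:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))   # closest neuron
        weights[winner] += lr * (x - weights[winner])              # move it toward x
    lr *= 0.98                          # decay so weights settle near the inputs

# after training, each input should activate a stable winner neuron
print([int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in inputs[:5]])
```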

Journal Article
TL;DR: The research findings highlighted that technological, people and organisational factors affect knowledge workers at the junior, middle and senior levels differently, and most users agree that POKM is a new knowledge-acquisition enabler.
Abstract: Being competitive in a wider global market ecosystem requires the support of knowledge. Hence, supporting knowledge sharing and reuse as part of the services provided by knowledge management systems is important for any organization. This research is an exploratory study and analysis of a Knowledge Management System (KMS) in a shared services company. It focuses on how an IT shared services company adopts a knowledge management system, by identifying the problems of KMS implementation, employees' perception of the KMS and potential improvements, in order to achieve the objectives of the company and to stay competitive through knowledge sharing and reuse. Because of many unknown factors, the research undertaken and presented in this paper analyzes the outcomes of the investigation. The study focuses on the knowledge management system implemented in the research subject, using a case study approach. A questionnaire survey was used as the data acquisition instrument to solicit employees' feedback on different aspects of the system, organizational motivation and the company's standard operating procedures. Based on their daily activities, employees' jobs were grouped into different "task groups". Results have shown that the KMS and its use depend on the defined "task group". From the analysis of the data compiled, visual presentations were generated and a number of problems and usage patterns were identified, leading to a better understanding of KMS usage. The findings of the research are helpful to practitioners and researchers, and the way employees in the IT shared services company use the KMS according to job categories can serve as a foundation for future projects.

Journal Article
TL;DR: A literature survey of the current state of the art in building a verifiable file system for flash memory, revisiting the success or failure status of the corresponding mini-challenge.
Abstract: In this article, we give a literature survey of the current state of the art in building a verifiable file system. The 15-year-old grand challenge in software verification was proposed by Hoare [Hoa03] and later refined by Joshi and Holzmann [JH07] into a mini-challenge to build a small verifiable file system for flash memory. Since around five years have passed since this mini-challenge was defined, it is important to revisit its success or failure status through the present literature survey.

Journal Article
TL;DR: A strong association is discovered, with 100% sensitivity, between hospital participation in multi-institutional quality improvement collaboratives during or before 2002, and changes in the risk-adjusted rates of mortality and morbidity observed after a 1-2 year lag.
Abstract: We introduce a new method for exploratory analysis of large data sets with time-varying features, where the aim is to automatically discover novel relationships between features (over some time period) that are predictive of any of a number of time-varying outcomes (over some other time period). Using a genetic algorithm, we co-evolve (i) a subset of predictive features, (ii) the attribute to be predicted, (iii) the time period over which to assess the predictive features, and (iv) the time period over which to assess the predicted attribute. After validating the method on 15 synthetic test problems, we used the approach for exploratory analysis of a large healthcare network data set. We discovered a strong association, with 100% sensitivity, between hospital participation in multi-institutional quality improvement collaboratives during or before 2002, and changes in the risk-adjusted rates of mortality and morbidity observed after a 1-2 year lag. The proposed approach is a potentially powerful and general tool for exploratory analysis of a wide range of time-series data sets.
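
As a hedged illustration of the chromosome the abstract describes (not the authors' implementation), the following Python sketch encodes the four co-evolved elements and a simple mutation operator; fitness evaluation against real data is omitted, and attribute names and year ranges are placeholders.

```python
# Hedged sketch of the chromosome the abstract describes: each individual
# encodes (i) a feature subset, (ii) the predicted attribute, (iii) the
# predictor time window and (iv) the outcome time window. Fitness evaluation
# is omitted; the attribute names and years are placeholders.
import random

FEATURES = ["participation", "bed_count", "region", "staffing", "case_mix"]
OUTCOMES = ["mortality_rate", "morbidity_rate"]
YEARS = list(range(1998, 2008))

def random_individual():
    return {
        "feature_mask": [random.random() < 0.5 for _ in FEATURES],
        "outcome": random.randrange(len(OUTCOMES)),
        "predictor_window": sorted(random.sample(YEARS, 2)),
        "outcome_window": sorted(random.sample(YEARS, 2)),
    }

def mutate(ind):
    choice = random.choice(["mask", "outcome", "window"])
    if choice == "mask":
        i = random.randrange(len(FEATURES))
        ind["feature_mask"][i] = not ind["feature_mask"][i]
    elif choice == "outcome":
        ind["outcome"] = random.randrange(len(OUTCOMES))
    else:
        key = random.choice(["predictor_window", "outcome_window"])
        ind[key] = sorted(random.sample(YEARS, 2))
    return ind

print(mutate(random_individual()))
```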

Journal Article
TL;DR: This paper presents in more detail an already accepted set of interactions for managing the rotation of solids, evaluates its pedagogic benefit on a test group of learners aged 9 to 15, and proposes an evaluation protocol based on mathematics didactics and pedagogy.
Abstract: Tablets with touch-screens, multi-touch interfaces and various sensors are becoming increasingly common. More and more schools are testing them with their pupils in the hope of obtaining pedagogic benefits. Thanks to this new type of device, new sets of interactions can be devised. Many studies have tested user reception of innovative interactions; the pedagogic benefits for solving 3D geometry problems can now be evaluated. In this paper, we present a categorization of interactions in the context of 3D geometry learning. We present in more detail an already accepted set of interactions for managing the rotation of solids, and we evaluate its pedagogic benefit on a test group of learners aged 9 to 15. We propose a protocol based on mathematics didactics and pedagogy, comparing the set of interactions with classic sheets of paper and physical solids. Our results show that using our set of interactions significantly increases the number of correct answers.

Journal Article
TL;DR: This paper presents a protocol for permutation routing that is secure, fault tolerant and energy-efficient, and the first protocol to provide such a QoS for permutation routing.
Abstract: In wireless sensor networks (WSNs), security and economy of energy are two important and necessary aspects to consider. In particular, security helps to ensure that such a network is not subject to attacks involving the reading, modification or destruction of information. This paper presents a protocol for permutation routing which is secure, fault tolerant and energy-efficient. The proposed protocol is based on two main principles. First, it uses a heterogeneous, hierarchical, clustered structure to assign the most important roles to the sensors with the most energy, in order to ensure the protection and routing of data items. Second, it uses multiple processes based on this structure to ensure that, regardless of network and sensor status, no data is lost and a data item travelling from a point A to a point B always arrives safely. This is the first protocol to provide such a QoS for permutation routing.

Journal Article
TL;DR: A visibility algorithm for a 3D urban environment consisting of a basic shape vocabulary with the box as the basic structure, based on an analytic solution for basic building structures such as a single box.
Abstract: This paper presents a unique solution to the visibility problem in 3D urban environments generated by procedural modeling. We introduce a visibility algorithm for a 3D urban environment consisting of mass-modeling shapes. Mass modeling consists of a basic shape vocabulary with the box as the basic structure. Using boxes as simple mass-model shapes, one can generate basic building blocks such as L, H, U and T shapes, creating a complex urban environment model whose visible parts are computed. The visibility analysis is based on an analytic solution for basic building structures such as a single box. The algorithm quickly generates the boundary of the visible surfaces of a single building and, consequently, its visible pyramid volume. Using simple geometric operations of projection and intersection between these visible pyramid volumes, hidden surfaces between buildings are rapidly computed. A real urban environment from Boston, MA, approximated with the 3D basic shape vocabulary model, demonstrates our approach. The paper also includes a unique concept of automatic approximate visibility analysis from point cloud data using RANSAC and Kalman filter methods. We extend the analytic visibility solution to cylinder and sphere objects and present automatic detection and object prediction with visibility analysis from a point cloud data set.
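
A hedged sketch of the single-box analytic step (the projection and intersection machinery for multiple buildings is omitted): for an axis-aligned box, a face is visible from a viewpoint exactly when the viewpoint lies on the outward side of that face's plane.

```python
# Minimal sketch of the single-box analytic step: an axis-aligned box face is
# visible from a viewpoint iff the viewpoint lies on the outward side of that
# face's plane. The full visible-pyramid/intersection machinery is omitted.

def visible_faces(box_min, box_max, viewpoint):
    (xmin, ymin, zmin), (xmax, ymax, zmax) = box_min, box_max
    vx, vy, vz = viewpoint
    faces = []
    if vx > xmax: faces.append("+x")
    if vx < xmin: faces.append("-x")
    if vy > ymax: faces.append("+y")
    if vy < ymin: faces.append("-y")
    if vz > zmax: faces.append("+z")
    if vz < zmin: faces.append("-z")
    return faces            # at most three faces of a box are visible

# viewpoint above and to the +x/+y side of a unit box
print(visible_faces((0, 0, 0), (1, 1, 1), (3, 2, 5)))   # ['+x', '+y', '+z']
```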

Journal Article
TL;DR: In this paper, the authors propose a self-adaptive reverse body bias (RBB) technique to minimize leakage currents in modern nano-scale CMOS technology; the proposed circuit has been designed and tested in 32nm bulk CMOS at 25°C under a supply voltage of less than 1V.
Abstract: This paper presents techniques to determine the optimal reverse body bias (RBB) voltage to minimize leakage currents in modern nano-scale CMOS technology. The proposed self-adaptive RBB system adaptively finds the optimum reverse body bias voltage for minimal leakage power by comparing the subthreshold leakage current (I_SUBTH), gate tunneling leakage (I_GATE), and band-to-band tunneling leakage current (I_BTBT) in standby mode. The proposed circuit has been designed and tested in 32nm bulk CMOS technology at 25°C under a supply voltage of less than 1V. The optimal RBB was achieved at -0.38V with 15% error in the test case of the paper, and the simulation results show that it is possible to reduce the total leakage current significantly, by as much as 69% of the total leakage, using the proposed circuit techniques.
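
The trade-off behind the optimal bias can be illustrated with a simple sweep: subthreshold leakage falls with increasing reverse body bias while band-to-band tunneling leakage rises, so their sum has a minimum. The coefficients below are invented for illustration, not 32nm device data or the authors' circuit.

```python
# Hedged sketch of the optimization idea: subthreshold leakage falls with more
# reverse body bias while band-to-band tunneling leakage rises, so the total
# has a minimum. The model coefficients below are illustrative, not 32nm data.
import numpy as np

vrbb = np.linspace(0.0, 1.0, 201)               # magnitude of reverse body bias (V)

i_subth = 100e-9 * np.exp(-4.0 * vrbb)          # decreases as |Vrbb| grows
i_gate  = 20e-9 * np.ones_like(vrbb)            # roughly bias-independent here
i_btbt  = 5e-9 * np.exp(3.5 * vrbb)             # increases as |Vrbb| grows

total = i_subth + i_gate + i_btbt
best = vrbb[np.argmin(total)]
print(f"optimal reverse body bias ~ -{best:.2f} V, total leakage {total.min()*1e9:.1f} nA")
```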

Journal Article
TL;DR: A set of applications, each with a voice interface, which were implemented on various technological platforms so as to investigate the process of instantiating speech engines produced with FIVE.
Abstract: Oral communication is, without the shadow of a doubt, the most natural form of human communication. By virtue of human-computer interaction becoming more and more common, a natural demand has arisen for systems with a voice-based interface. This paper presents a set of applications, each with a voice interface, which were implemented on various technological platforms so as to investigate the process of instantiating speech engines produced with FIVE. The experiments undertaken presented some technical restrictions, which, however, did not prevent the applications from being run.

Journal Article
TL;DR: This paper proposes a virtual impedance method, which generates a virtual force between the mobile system and its environment using exteroceptive localization tools, to monitor a smart wheelchair over a wireless network based on the 802.11 standards.
Abstract: Developing new systems for assisting disabled and elderly people requires a multidisciplinary approach based on new technologies matched to users' needs. Intelligent wheelchairs can help this category of people live more independently. One of the main issues in smart wheelchair design is how to ensure a reliable remote tele-operation task with obstacle avoidance. In this direction, various methods based on impedance control, potential fields and edge detection have been investigated. These methods have the advantage of fast motion planning for nearby obstacles, but with the shortcoming of getting stuck in a local minimum where the attractive and repulsive forces are equal. To overcome the local minimum, a virtual impedance method is proposed in which a free vector is added to the repulsive force. The principle is to generate a virtual force between the mobile system and the environment using exteroceptive localization tools. In this paper, we use this kind of approach to monitor a smart wheelchair over a wireless network based on the 802.11 standards. The virtual forces are conveyed to the human operator through a joystick as tactile information. To illustrate the efficiency of the proposed approach, experiments were performed on a smart wheelchair developed in our lab, called LIASD-WheelChair.
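
A hedged 2D sketch of the force composition described above (gains, ranges and geometry are placeholders, not the authors' parameters): attractive and repulsive potential-field forces plus a "free vector" orthogonal to the repulsion, applied when the two nearly cancel.

```python
# Hedged 2D sketch of the force composition the abstract describes: classical
# attractive + repulsive potential-field forces, plus a "free vector" orthogonal
# to the repulsion to escape the local minimum where the two cancel out.
# Gains, ranges and geometry are illustrative placeholders.
import numpy as np

K_ATT, K_REP, D0 = 1.0, 0.5, 2.0        # gains and obstacle influence radius

def virtual_force(pos, goal, obstacles):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    f_att = K_ATT * (goal - pos)                       # pull toward the goal
    f_rep = np.zeros(2)
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < D0:
            f_rep += K_REP * (1/d - 1/D0) / d**2 * (diff / d)
    total = f_att + f_rep
    if np.linalg.norm(total) < 1e-3 and np.linalg.norm(f_rep) > 0:
        free = np.array([-f_rep[1], f_rep[0]])         # orthogonal "free vector"
        total = total + 0.5 * free / np.linalg.norm(free)
    return total         # e.g. mapped to joystick force feedback for the operator

print(virtual_force(pos=(0, 0), goal=(5, 0), obstacles=[(1, 0)]))
```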