
Showing papers in "IEEE Transactions on Autonomous Mental Development in 2013"


Journal ArticleDOI
TL;DR: An investigation of maternal contingent responsiveness to infant object exploration in 190 mother-infant pairs from diverse cultural communities found that mothers' responses to infants were didactic and multimodal.
Abstract: We examined maternal contingent responsiveness to infant object exploration in 190 mother-infant pairs from diverse cultural communities. Dyads were video-recorded during book-sharing and play when infants were 14 mo. Researchers coded the temporal onsets and offsets of infant and mother object exploration and mothers' referential (e.g., “That's a bead”) and regulatory (e.g., “Stop it”) language. The times when infant or mother were neither exploring objects nor communicating were classified as “off task.” Sequential analysis was used to examine whether certain maternal behaviors were more (or less) likely to follow infant object exploration relative to chance, to one another, and to times when infants were off task. Mothers were more likely to explore objects and use referential language in response to infant object exploration than to use regulatory language or be off task, and maternal behaviors were reduced in the context of infants being off task. Additionally, mothers coordinated their object exploration with referential language specifically; thus, mothers' responses to infants were didactic and multimodal. Infant object exploration elicits reciprocal object exploration and informative verbal input from mothers, illustrating the active role infants play in their social experiences.
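
To make the sequential-analysis step concrete, here is a minimal Python sketch of contrasting the probability of a maternal behavior following infant exploration against its base rate. The event streams, labels, and the 3-second window are hypothetical illustrations, not taken from the study:

```python
# Minimal sketch of a lag-sequential contrast, assuming hypothetical
# event streams coded as (onset, offset, label) tuples. The window
# size and labels are illustrative, not the authors' coding scheme.
from typing import List, Tuple

Event = Tuple[float, float, str]  # (onset_sec, offset_sec, label)

def prob_following(target: List[Event], trigger_onsets: List[float],
                   window: float = 3.0) -> float:
    """P(target behavior starts within `window` seconds of a trigger onset)."""
    hits = sum(
        any(t <= on < t + window for (on, _, _) in target)
        for t in trigger_onsets
    )
    return hits / len(trigger_onsets) if trigger_onsets else 0.0

infant_explore = [(2.0, 6.0, "explore"), (15.0, 20.0, "explore")]
mother_referential = [(3.1, 4.0, "referential"), (16.2, 17.0, "referential")]
mother_regulatory = [(40.0, 41.0, "regulatory")]

onsets = [on for (on, _, _) in infant_explore]
print(prob_following(mother_referential, onsets))  # high: follows exploration
print(prob_following(mother_regulatory, onsets))   # low: unrelated to exploration
```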

86 citations


Journal ArticleDOI
TL;DR: It is shown that, by using as temporal anchor points those moments where two objects touch or un-touch each other during a manipulation, one can define a relatively small tree-like manipulation ontology.
Abstract: Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, so far there have been only a few attempts to represent manipulation types in order to understand the underlying principles. Here we first discuss how manipulation actions are structured in space and time. For this we use as temporal anchor points those moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that in this way one can define a relatively small tree-like manipulation ontology. We find fewer than 30 fundamental manipulations. The temporal anchors also provide us with information about when to pay attention to additional important information, for example when to consider trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.
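
As a small illustration of the anchoring idea, the sketch below extracts touch and un-touch moments from a boolean contact signal. The signal and its values are toy assumptions, not the paper's data:

```python
# Minimal sketch of using touch/un-touch moments as temporal anchors,
# assuming a hypothetical boolean contact signal per object pair.
import numpy as np

# contact[t] == True while hand and object (or two objects) touch; toy data
contact = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0], dtype=bool)

# Anchor points: indices where the contact relation changes
changes = np.flatnonzero(np.diff(contact.astype(int)) != 0) + 1
touch_events = [t for t in changes if contact[t]]        # un-touched -> touched
untouch_events = [t for t in changes if not contact[t]]  # touched -> un-touched
print(touch_events, untouch_events)  # [2, 7] [5, 9]
```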

78 citations


Journal ArticleDOI
TL;DR: Current knowledge on tool use development in infants is reviewed in order to provide relevant information to cognitive developmental roboticists seeking to design artificial systems that develop tool use abilities.
Abstract: In this paper, we review current knowledge on tool use development in infants in order to provide relevant information to cognitive developmental roboticists seeking to design artificial systems that develop tool use abilities. This information covers: 1) sketching developmental pathways leading to tool use competences; 2) the characterization of learning and test situations; 3) the crystallization of seven mechanisms underlying the developmental process; and 4) the formulation of a number of challenges and recommendations for designing artificial systems that exhibit tool use abilities in complex contexts.

76 citations


Journal ArticleDOI
TL;DR: It is argued that the beginnings of joint intentionality can be traced to the practice of embedding the child's actions into culturally shaped episodes and as action becomes coaction, an infant's perception becomes tuned to interaction affordances.
Abstract: Are higher-level cognitive processes the only way that purposefulness can be introduced into human interaction? In this paper, we provide a microanalysis of early mother-child interactions and argue that the beginnings of joint intentionality can be traced to the practice of embedding the child's actions into culturally shaped episodes. As action becomes coaction, an infant's perception becomes tuned to interaction affordances.

72 citations


Journal ArticleDOI
Shingo Murata, Jun Namikawa, Hiroaki Arie, Shigeki Sugano, Jun Tani
TL;DR: It was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories, and that this learning scheme is essential for the acquisition of sensory-guided skilled behavior.
Abstract: This study proposes a novel type of dynamic neural network model that can learn to extract stochastic or fluctuating structures hidden in time series data. The network learns to predict not only the mean of the next input state, but also its time-dependent variance. The training method is based on maximum likelihood estimation by using the gradient descent method and the likelihood function is expressed as a function of the estimated variance. Regarding the model evaluation, we present numerical experiments in which training data were generated in different ways utilizing Gaussian noise. Our analysis showed that the network can predict the time-dependent variance and the mean and it can also reproduce the target stochastic sequence data by utilizing the estimated variance. Furthermore, it was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories. This learning scheme is essential for the acquisition of sensory-guided skilled behavior.
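
The training objective described above can be sketched as a Gaussian negative log-likelihood over a predicted mean and time-dependent variance. The tiny recurrent network and toy data below are illustrative assumptions; only the variance-aware maximum-likelihood objective follows the abstract:

```python
# Minimal sketch of maximum-likelihood training with an estimated,
# time-dependent variance. Network sizes and data are illustrative.
import torch
import torch.nn as nn

class MeanVarRNN(nn.Module):
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(dim, hidden, batch_first=True)
        self.mean = nn.Linear(hidden, dim)
        self.logvar = nn.Linear(hidden, dim)  # log-variance keeps variance positive

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.mean(h), self.logvar(h)

def gaussian_nll(target, mean, logvar):
    # -log N(target | mean, exp(logvar)), averaged over steps
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

model = MeanVarRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 50, 1)           # toy fluctuating sequences
mean, logvar = model(x[:, :-1])     # predict the next input state
loss = gaussian_nll(x[:, 1:], mean, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```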

69 citations


Journal ArticleDOI
TL;DR: This study considered both the semantic and temporal dimensions of responsiveness on a single cohort while controlling for level of parental education and the overall amount of communication on the part of both the caregiver and the infant.
Abstract: Maternal responsiveness has been positively related with a range of socioemotional and cognitive outcomes including language. A substantial body of research has explored different aspects of verbal responsiveness. However, perhaps because of the many ways in which it can be operationalized, there is currently a lack of consensus around what type of responsiveness is most helpful for later language development. The present study sought to address this problem by considering both the semantic and temporal dimensions of responsiveness on a single cohort while controlling for level of parental education and the overall amount of communication on the part of both the caregiver and the infant. We found that only utterances that were both semantically appropriate and temporally linked to an infant vocalization were related to infant expressive vocabulary at 18 mo.

67 citations


Journal ArticleDOI
TL;DR: This research identifies the requirements for cooperation, presents a cognitive system that implements these requirements, and demonstrates the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real time.
Abstract: One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a “shared plan”—which defines the interlaced actions of the two cooperating agents—in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.

55 citations


Journal ArticleDOI
TL;DR: This modeling study compares the competence of the LGMD and the DSNs, investigates the cooperation of the two neural vision systems for collision recognition via artificial evolution, and suggests that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
Abstract: The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could function together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system that combines the LGMD and DSN subsystems. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance of other types of neural networks playing the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
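
For readers unfamiliar with LGMD models, here is a hypothetical sketch of the classic scheme (frame-difference excitation with lateral inhibition); the structure and all parameters are illustrative, not this paper's exact evolved network:

```python
# Minimal sketch of an LGMD-style looming/collision detector.
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_response(prev_frame, frame, inhibition_weight=0.6):
    # Excitation: luminance change between consecutive frames
    excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
    # Lateral inhibition: blurred excitation from neighboring cells
    inhibition = uniform_filter(excitation, size=3)
    s = np.maximum(excitation - inhibition_weight * inhibition, 0.0)
    return float(s.sum() / s.size)  # membrane potential per unit area

prev = np.zeros((64, 64))
cur = np.zeros((64, 64))
cur[16:48, 16:48] = 255.0              # a rapidly expanding (looming) patch
print(lgmd_response(prev, cur) > 5.0)  # True: strong response, hypothetical threshold
```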

49 citations


Journal ArticleDOI
TL;DR: The authors focus on the use of the emotion of fear as an adaptive mechanism to avoid dangerous situations and prove the advantages of considering fear in the decision making system by comparing the robot's performance with and without fear.
Abstract: Currently, artificial emotions are being extensively used in robots. Most of these implementations are employed to display affective states. Nevertheless, their use to drive the robot's behavior is not so common. This is the approach followed by the authors in this work. In this research, emotions are not treated in general but individually. Several emotions have been implemented in a real robot, but in this paper the authors focus on the use of the emotion of fear as an adaptive mechanism to avoid dangerous situations. In fact, fear is used as a motivation which guides the behavior during specific circumstances. Appraisal of fear is one of the cornerstones of this work. A novel mechanism learns to identify the harmful circumstances which cause damage to the robot. Hence, these circumstances elicit the fear emotion and are known as fear releasers. In order to prove the advantages of considering fear in our decision making system, the robot's performance with and without fear is compared and the behaviors are analyzed. The robot's behaviors exhibited in relation to fear are natural, i.e., the same kind of behaviors can be observed in animals. Moreover, they have not been preprogrammed, but learned through real interactions in the real world. All these ideas have been implemented in a real robot living in a laboratory and interacting with several items and people.
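
One way to picture the learning of fear releasers is a simple conditioning update in which stimuli that reliably precede damage acquire fear value. The stimulus names, learning rate, and delta rule below are illustrative assumptions, not the authors' mechanism:

```python
# Minimal sketch of learning "fear releasers": stimuli that precede
# damage acquire fear value via a delta-rule update. All names and
# rates are hypothetical.
fear_value = {}  # stimulus -> learned fear strength
ALPHA = 0.3

def update(stimuli_present, damage_signal):
    for s in stimuli_present:
        v = fear_value.get(s, 0.0)
        fear_value[s] = v + ALPHA * (damage_signal - v)  # delta rule

update({"loud_noise", "red_object"}, damage_signal=1.0)  # damage occurred
update({"red_object"}, damage_signal=0.0)                # no damage this time
print(fear_value)  # loud_noise keeps its value; red_object's decays
```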

31 citations


Journal ArticleDOI
TL;DR: Experiments on walking with a 36-DOF humanoid robot revealed a remarkably high adaptation capability of tacit learning in terms of gait generation, power consumption, and robustness, compared with that of conventional control architectures and that of human beings.
Abstract: The capability of adapting to unknown environmental situations is one of the most salient features of biological regulations. This capability is ascribed to the learning mechanisms of biological regulatory systems, which are totally different from the current artificial machine-learning paradigm. We consider that all computations in biological regulatory systems result from the spatial and temporal integration of simple and homogeneous computational media, such as the activities of neurons in the brain and protein-protein interactions in intracellular regulation. Adaptation is the outcome of the local activities of the distributed computational media. To investigate the learning mechanism behind this computational scheme, we proposed a learning method that embodies the features of biological systems, termed tacit learning. In this paper, we elaborate this notion further and apply it to bipedal locomotion of a 36-DOF humanoid robot in order to discuss the adaptation capability of tacit learning compared with that of conventional control architectures and that of human beings. Experiments on walking revealed a remarkably high adaptation capability of tacit learning in terms of gait generation, power consumption, and robustness.

24 citations


Journal ArticleDOI
TL;DR: This paper outlines a systems approach for characterizing fine-grained temporal dynamics of developing social interaction, and provides best practices for capturing, coding, and analyzing interaction activity on multiple temporal scales.
Abstract: Infants are biologically prepared to learn complex behaviors by interacting in dynamic, responsive social environments. Although the importance of interactive social experiences has long been recognized, current methods for studying complex multimodal interactions are lagging. This paper outlines a systems approach for characterizing fine-grained temporal dynamics of developing social interaction. We provide best practices for capturing, coding, and analyzing interaction activity on multiple temporal scales, from fractions of seconds (e.g., gaze shifts), to minutes (e.g., coordinated play episodes), to weeks or months (e.g., developmental change).

Journal ArticleDOI
TL;DR: A neurorobotic model that develops reaching and grasping skills analogous to those displayed by infants during their early developmental stages is presented, taking into account the reflex behaviors initially possessed by infants and the neurophysiological and cognitive maturation occurring during the relevant developmental period.
Abstract: We present a neurorobotic model that develops reaching and grasping skills analogous to those displayed by infants during their early developmental stages. The learning process is realized in an incremental manner, taking into account the reflex behaviors initially possessed by infants and the neurophysiological and cognitive maturation occurring during the relevant developmental period. The behavioral skills acquired by the robots closely match those displayed by children. The comparison between incremental and nonincremental experiments demonstrates how some of the limitations characterizing the initial developmental phase channel the learning process toward better solutions.

Journal ArticleDOI
TL;DR: A spike-based IP model/adaptation rule is proposed for an integrate-and-fire (IF) neuron to model this biological phenomenon; the rule helps an IF neuron keep its firing activity at a relatively “low but not too low” level.
Abstract: The discovery of neuronal intrinsic plasticity (IP) processes, which persistently modify a neuron's excitability, necessitates a new concept of the neuronal plasticity mechanism and may profoundly influence our ideas on learning and memory. In this paper, we propose a spike-based IP model/adaptation rule for an integrate-and-fire (IF) neuron to model this biological phenomenon. By utilizing spikes denoted by Dirac delta functions rather than computing instantaneous firing rates for the time-dependent stimulus, this simple adaptation rule adjusts two parameters of an individual IF neuron to modify its excitability. As a result, this adaptation rule helps an IF neuron keep its firing activity at a relatively “low but not too low” level and makes the spike-count distributions computed with adjusted window sizes similar to the experimental results.
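
The flavor of such a spike-based IP rule can be sketched with an IF neuron whose threshold is nudged on each spike and slowly relaxed otherwise, pulling the rate toward a low target. The choice of adapted parameter, rates, and constants below are illustrative assumptions, not the paper's exact rule:

```python
# Minimal sketch of an integrate-and-fire neuron with a spike-based
# intrinsic-plasticity rule steering firing toward a low target rate.
import numpy as np

dt, T = 1e-3, 10.0
tau, v_th, r_target = 20e-3, 1.0, 5.0   # leak time constant, threshold, target Hz
eta = 0.01
v, spikes = 0.0, []

rng = np.random.default_rng(0)
for step in range(int(T / dt)):
    I = 1.2 + 0.5 * rng.standard_normal()  # noisy time-dependent input
    v += dt * (-v / tau + I / tau)         # leaky integration
    if v >= v_th:
        v = 0.0
        spikes.append(step * dt)
        v_th += eta                        # fired: raise threshold (less excitable)
    v_th -= eta * r_target * dt            # slow relaxation pulls rate back up

print(len(spikes) / T, "Hz (should settle near the low target rate)")
```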

Journal ArticleDOI
TL;DR: A model of word learning based on interacting self-organizing maps, representing the auditory and visual modalities respectively, is presented; it is argued that the learning mechanism introduced in this model could play a role in the facilitation of infants' categorization through verbal labeling.
Abstract: Infancy research demonstrating a facilitation of visual category formation in the presence of verbal labels suggests that infants' object categories and words develop interactively. This contrasts with the notion that words are simply mapped “onto” previously existing categories. To investigate the computational foundations of a system in which word and object categories develop simultaneously and in an interactive fashion, we present a model of word learning based on interacting self-organizing maps that represent the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling.
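
A minimal sketch of the dual-map architecture, assuming toy feature vectors, map sizes, and learning rates (and omitting the neighborhood update for brevity); only the idea of Hebbian links propagating co-activation between a visual and an auditory map follows the abstract:

```python
# Two self-organizing maps coupled by Hebbian connections (sketch).
import numpy as np

rng = np.random.default_rng(1)
n, d_vis, d_aud = 25, 4, 3                 # 5x5 maps, toy feature dims
W_vis = rng.random((n, d_vis))
W_aud = rng.random((n, d_aud))
H = np.zeros((n, n))                       # Hebbian visual<->auditory links

def bmu(W, x):                             # best-matching unit
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))

def train_pair(x_vis, x_aud, lr=0.1, lr_hebb=0.05):
    i, j = bmu(W_vis, x_vis), bmu(W_aud, x_aud)
    W_vis[i] += lr * (x_vis - W_vis[i])    # neighborhood update omitted
    W_aud[j] += lr * (x_aud - W_aud[j])
    H[i, j] += lr_hebb * (1 - H[i, j])     # co-activation strengthens the link

for _ in range(200):                       # paired "object + label" inputs
    train_pair(np.array([1, 0, 0, 0.]) + 0.1 * rng.standard_normal(4),
               np.array([1, 0, 0.]) + 0.1 * rng.standard_normal(3))

i = bmu(W_vis, np.array([1, 0, 0, 0.]))
print("auditory unit activated by the visual category:", int(H[i].argmax()))
```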

Journal ArticleDOI
TL;DR: Through incremental learning and autonomous practice, the developmental network (DN) theory lumps (abstracts) infinitely many temporal context sequences into a single equivalent state, and a skill learned under one sequence is automatically transferred to infinitely many other state-equivalent sequences in the future without the need for explicit learning.
Abstract: Informed by brain anatomical studies, we present the developmental network (DN) theory on brain-like temporal information processing. The states of the brain are at its effector end, emergent and open. A finite automaton (FA) is considered an external symbolic model of the brain's temporal behaviors, but the FA uses handcrafted states and is without “internal” representations. The term “internal” means inside the network “skull.” Using action-based state equivalence and the emergent state representations, the time-driven processing of DN performs state-based abstraction and state-based skill transfer. Each state of DN, as a set of actions, is openly observable by the external environment (including teachers). Thus, the external environment can teach the state at every frame time. Through incremental learning and autonomous practice, the DN lumps (abstracts) infinitely many temporal context sequences into a single equivalent state. Using this state equivalence, a skill learned under one sequence is automatically transferred to infinitely many other state-equivalent sequences in the future without the need for explicit learning. Two experiments are shown as examples: the experiment on video processing showed almost perfect recognition rates in disjoint tests, and the experiment on text language, using corpora from the Wall Street Journal, treated semantics and syntax in a unified, interactive way.

Journal ArticleDOI
TL;DR: A conceptual model for imitation learning that abstracts spatio-temporal demonstrations based on their perceptual and functional characteristics is presented; results on a humanoid robot show the efficacy of the proposed model.
Abstract: This paper presents a conceptual model for imitation learning to abstract spatio-temporal demonstrations based on their perceptual and functional characteristics. To this end, the concepts are represented by prototypes irregularly scattered in the perceptual space but sharing the same functionality. Functional similarity between demonstrations is understood by reinforcements of the teacher or recognizing the effects of actions. Abstraction, concept acquisition, and self-organization of prototypes are performed through incremental and gradual learning algorithms. In these algorithms, hidden Markov models are used to prototype perceptually similar demonstrations. In addition, a mechanism is introduced to integrate perceptions of different modalities for multimodal concept recognition. Performance of the proposed model is evaluated in two different tasks. The first one is imitation learning of some hand gestures through interaction with the teachers. In this task, the perceptions from different modalities, including vision, motor, and audition, are used in a variety of experiments. The second task is to learn a set of actions by recognizing their emotional effects. Results of the experiments on a humanoid robot show the efficacy of our model for conceptual imitation learning.
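
The prototyping step can be pictured as fitting one HMM per concept and recognizing a new demonstration by maximum likelihood. The sketch below assumes the hmmlearn package and toy 2-D trajectories; the concept labels, data, and model sizes are illustrative, not the paper's:

```python
# Minimal sketch: HMM prototypes for perceptually similar demonstrations,
# with recognition by log-likelihood.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)

def demos(offset, k=5, t=30):  # toy 2-D demonstration trajectories per concept
    return [offset + 0.1 * rng.standard_normal((t, 2)).cumsum(0) for _ in range(k)]

prototypes = {}
for label, offset in [("wave", 0.0), ("circle", 5.0)]:
    X = np.vstack(demos(offset))
    m = hmm.GaussianHMM(n_components=3, n_iter=20)
    m.fit(X, lengths=[30] * 5)
    prototypes[label] = m

test = demos(5.0, k=1)[0]      # an unseen "circle"-like demonstration
print(max(prototypes, key=lambda l: prototypes[l].score(test)))  # "circle"
```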

Journal ArticleDOI
TL;DR: A reward-based learning framework is proposed that achieves an efficient strategy for distributing the constrained system resources among modules to keep relevant environmental information up to date for higher level task learning and executing mechanisms in the system.
Abstract: Real world environments are so dynamic and unpredictable that a goal-oriented autonomous system performing a set of tasks repeatedly never experiences the same situation, even though the task routines are the same. Hence, manually designed solutions to execute such tasks are likely to fail due to such variations. Developmental approaches seek to solve this problem by implementing local learning mechanisms in systems that can unfold capabilities to achieve a set of tasks through interactions with the environment. However, gathering all the information available in the environment for local learning mechanisms to process is hardly possible due to the limited resources of the system. Thus, an information acquisition mechanism is necessary to find task-relevant information sources and to apply a strategy to update the knowledge of the system about these sources efficiently in time. A modular systems approach may provide a useful structured and formalized basis for that. In such systems, different modules may request access to the constrained system resources to acquire information they are tuned for. We propose a reward-based learning framework that achieves an efficient strategy for distributing the constrained system resources among modules to keep relevant environmental information up to date for higher level task learning and executing mechanisms in the system. We apply the proposed framework to a visual attention problem in a system using the iCub humanoid in simulation.
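
Reward-based distribution of a constrained resource among modules can be sketched as a softmax bandit: the module granted the resource returns a reward (how much it reduced uncertainty), and access probabilities adapt accordingly. Module names, rewards, and rates below are illustrative assumptions:

```python
# Minimal sketch of reward-based allocation of a constrained resource
# (e.g., gaze) among competing modules, using a softmax bandit.
import numpy as np

rng = np.random.default_rng(3)
modules = ["face_tracker", "object_tracker", "motion_detector"]
Q = np.zeros(len(modules))                 # estimated information gain per module
ALPHA, TEMP = 0.1, 0.3

def softmax(q, temp):
    z = np.exp((q - q.max()) / temp)
    return z / z.sum()

for step in range(500):
    i = rng.choice(len(modules), p=softmax(Q, TEMP))  # grant the resource
    # Reward: how much the chosen module reduced uncertainty (simulated here)
    reward = {0: 0.8, 1: 0.5, 2: 0.2}[i] + 0.1 * rng.standard_normal()
    Q[i] += ALPHA * (reward - Q[i])

print(dict(zip(modules, Q.round(2))))  # the most informative module earns most access
```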

Journal ArticleDOI
TL;DR: It is found that infants may be privy to patterns of information in mothers' gaze which signal action boundaries and particularly highlight action goals, and that these patterns shift based on the age or knowledge state of the learner.
Abstract: When demonstrating objects to young children, parents use specialized action features, called “motionese,” which elicit attention and facilitate imitation. We hypothesized that the timing of mothers' infant-directed eye gaze in such interactions may provide systematic cues to the structure of action. We asked 35 mothers to demonstrate a series of tasks on objects to their 7- and 12-mo-old infants, with three objects affording enabling sequences leading to a salient goal, and three objects affording arbitrary sequences with no goal. We found that mothers' infant-directed gaze was more aligned with action boundary points than expected by chance, and was particularly tightly aligned with the final actions of enabling sequences. For 7- but not 12-mo-olds, mothers spent more time with arbitrary than enabling-sequence objects, and provided especially tight alignment for action initiations relative to completions. These findings suggest that infants may be privy to patterns of information in mothers' gaze which signal action boundaries and particularly highlight action goals, and that these patterns shift based on the age or knowledge state of the learner.
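
Whether gaze onsets align with action boundaries "more than expected by chance" can be tested with a circular-shift permutation, sketched below. The event times, tolerance, and session length are hypothetical, not the study's data or exact statistic:

```python
# Minimal sketch of a chance-level test for gaze/boundary alignment
# via circular-shift permutation.
import numpy as np

rng = np.random.default_rng(4)
gaze = np.array([2.1, 5.0, 9.2, 14.9])        # infant-directed gaze onsets (s)
boundaries = np.array([2.0, 5.1, 9.0, 15.0])  # coded action boundary points (s)
T, tol = 20.0, 0.3                            # session length, alignment tolerance

def n_aligned(g):
    return sum(np.min(np.abs(boundaries - t)) <= tol for t in g)

observed = n_aligned(gaze)
null = [n_aligned((gaze + rng.uniform(0, T)) % T) for _ in range(1000)]
p = np.mean([n >= observed for n in null])
print(observed, p)  # alignment count and permutation p-value
```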

Journal ArticleDOI
TL;DR: The research and theory described here evolved from fine-grained descriptions of early word learning based on videotapes of infants and their families in the US and Mexico; eye-tracking experiments established that infants detected referent-word relations best when the speaker used a show gesture.
Abstract: The research and theory described here evolved from fine-grained descriptions of early word learning based on videotapes of infants and their families in the US and Mexico. This naturalistic approach led to theorizing about the perceptual processes underlying the caregiver's role in assisting infants' early word learning. Caregivers educate infants' attention by synchronizing the saying of a word with a dynamic gesture, a show, in which they display the object/referent to the infant. By making this perceptual information prominent, infants can detect an amodal invariant across gesture and speech. Doing so brackets the word and object within the auditory and visual flow of events and constitutes the basis for perceiving them as belonging together. Stemming from the earlier naturalistic work, we designed eye-tracking experiments to test three hypotheses: 1) infants will attend more to an object when the referring word is said if the speaker uses a dynamic, synchronized show gesture, rather than a static or asynchronous gesture; 2) a show gesture will be most effective in drawing attention away from the mouth to the object when the referring word is spoken; and 3) the use of a show gesture will lead to enhanced word learning. These experiments confirmed our hypotheses, establishing that infants detected referent-word relations best when the speaker used a show gesture. These results support the SEED Framework of early language development which delineates how the situated, culturally embodied, emergent, and distributed character of caregiver-infant interaction nurtures communicative behavior. The ability to communicate germinates and takes root during social interaction, as the dynamically-coupled perceiving-and-acting of infants and caregivers forms a continuous loop, each of them unceasingly affecting the other. These findings have implications for the design of cognitive systems in autonomous robots, especially “tutor spotting” and detecting “acoustic packages.”

Journal ArticleDOI
TL;DR: This work shows how distance estimation can be improved autonomously, and finds that actions that, in principle, do not alter the robot's distance to the target are a powerful tool for exposing estimation errors.
Abstract: We investigate how a humanoid robot with a randomly initialized binocular vision system can learn to improve judgments about egocentric distances using limited action and interaction that might be available to human infants. First, we show how distance estimation can be improved autonomously. We consider our approach to be autonomous because the robot learns to accurately estimate distance without a human teacher providing the distances to training targets. We find that actions that, in principle, do not alter the robot's distance to the target are a powerful tool for exposing estimation errors. These errors can be used to train a distance estimator. Furthermore, the simple action used (i.e., neck rotation) does not require high level cognitive processing or fine motor skill. Next, we investigate how interaction with humans can further improve visual distance estimates. We find that human interaction can improve distance estimates for far targets outside of the robot's peripersonal space. This is accomplished by extending our autonomous approach above to integrate additional information provided by a human. Together these experiments suggest that both action and interaction are important tools for improving perceptual estimates.
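
One way to see how a distance-preserving action exposes estimation errors: if two estimates of the same target, before and after a neck rotation, disagree, the discrepancy is a training signal. The features, estimator, and geometry below are hypothetical; note that this consistency signal alone drives invariance to the action, while anchoring absolute scale requires further cues (such as the human-interaction phase described above):

```python
# Minimal sketch of a consistency error from a distance-preserving action.
import numpy as np

rng = np.random.default_rng(5)
w = rng.random(2)  # linear distance estimator on hypothetical visuomotor features

def features(true_dist, neck_angle):
    # Toy features: a distance-dependent cue plus an angle-dependent nuisance
    return np.array([1.0 / true_dist, 0.1 * neck_angle]) + 0.01 * rng.standard_normal(2)

for _ in range(2000):
    d = rng.uniform(0.3, 2.0)                     # unknown true target distance
    f0, f1 = features(d, 0.0), features(d, 20.0)  # before / after rotation
    e = w @ f0 - w @ f1                # estimates should agree: error signal
    w -= 0.05 * e * (f0 - f1)          # gradient step on the consistency loss

print(w)  # weight on the angle-dependent feature is driven toward zero
```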

Journal ArticleDOI
TL;DR: A computational model mixing concepts and techniques from these two domains, involving a simulated robot learner interacting with a human teacher, shows that a general form of imitation learning can allow a learner to discover the channels of communication used by an ambiguous teacher, addressing a form of abstract Gavagai problem.
Abstract: We identify a strong structural similarity between the Gavagai problem in language acquisition and the problem of imitation learning of multiple context-dependent sensorimotor skills from human teachers. In both cases, a learner has to resolve concurrently multiple types of ambiguities while learning how to act in response to particular contexts through the observation of a teacher's demonstrations. We argue that computational models of language acquisition and models of motor skill learning by demonstration have so far only considered distinct subsets of these types of ambiguities, leading to the use of distinct families of techniques across two loosely connected research domains. We present a computational model, mixing concepts and techniques from these two domains, involving a simulated robot learner interacting with a human teacher. Proof-of-concept experiments show that: 1) it is possible to consider simultaneously a larger set of ambiguities than considered so far in either domain; and 2) this allows us to model important aspects of language acquisition and motor learning within a single process that does not initially separate what is “linguistic” from what is “nonlinguistic.” Rather, the model shows that a general form of imitation learning can allow a learner to discover channels of communication used by an ambiguous teacher, thus addressing a form of abstract Gavagai problem (ambiguity about which observed behavior is “linguistic”, and in that case which modality is communicative).

Journal ArticleDOI
TL;DR: The contributions of this Special Issue exemplify approaches capturing the microdynamics of interaction to provide us with insights into the adaptation and learning processes.
Abstract: Social learning takes place within an interactional loop. The contributions of this Special Issue exemplify approaches capturing the microdynamics of interaction to provide us with insights into the adaptation and learning processes.

Journal ArticleDOI
TL;DR: This work shows that, using computational audiovisual scene analysis (CAVSA), audio-motor maps can be adapted online in free interaction with a number of a priori unknown speakers, and that the approach is more robust in multiperson scenarios than the state of the art in terms of learning progress.
Abstract: For sound localization, the binaural auditory system of a robot needs audio-motor maps, which represent the relationship between certain audio features and the position of the sound source. This mapping is normally learned during an offline calibration in controlled environments, but we show that using computational audiovisual scene analysis (CAVSA), it can be adapted online in free interaction with a number of a priori unknown speakers. CAVSA enables a robot to understand dynamic dialog scenarios, such as the number and position of speakers, as well as who is the current speaker. Our system does not require specific robot motions and thus can work during other tasks. The performance of online-adapted maps is continuously monitored by computing the difference between online-adapted and offline-calibrated maps and also comparing sound localization results with ground truth data (if available). We show that our approach is more robust in multiperson scenarios than the state of the art in terms of learning progress. We also show that our system is able to bootstrap with a randomized audio-motor map and adapt to hardware modifications that induce a change in audio-motor maps.
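
The core online-adaptation idea can be pictured as follows: when a speaker is both heard and seen, the visually confirmed direction updates the map entry for the current audio feature. The binning, feature, and learning rate are illustrative assumptions, not the paper's CAVSA pipeline:

```python
# Minimal sketch of online audio-motor map adaptation from
# audiovisual correspondence.
import numpy as np

rng = np.random.default_rng(6)
bins = 36                          # audio feature (e.g., interaural cue) bins
audio_motor_map = np.zeros(bins)   # bin -> estimated sound-source azimuth (deg)
ETA = 0.2

def update(audio_bin: int, visual_azimuth: float):
    m = audio_motor_map[audio_bin]
    audio_motor_map[audio_bin] = m + ETA * (visual_azimuth - m)

# A speaker at 30 deg is repeatedly heard (falling in bin 21) while visible
for _ in range(20):
    update(21, 30.0 + rng.normal(0, 2))

print(audio_motor_map[21])  # converges near 30 deg without offline calibration
```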

Journal ArticleDOI
TL;DR: A reward-mediated model implemented on a NAO humanoid robot is proposed that replicates the main results from this study, showing an increase in reaching attempts to nonreachable distances after the onset of walking.
Abstract: Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence on an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared nonwalkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Nonwalkers, however, reached less on subsequent trials, showing clear adjustment of their reaching decisions after failures. On the contrary, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model implemented on a NAO humanoid robot that replicates the main results from our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking.

Journal ArticleDOI
TL;DR: In the above-named article [ibid., vol. 4, no. 4, pp. 305-314, Dec. 2012], the current affiliation within the biography of J.-J. Cabibihan was mistakenly written as Gemalto Singapore, Singapore; his current affiliation is the National University of Singapore.
Abstract: In the above-named article [ibid., vol. 4, no. 4, pp. 305-314, Dec. 2012], the current affiliation within the biography of J.-J. Cabibihan was mistakenly written as Gemalto Singapore, Singapore. That is the current affiliation of S. Pramanik. Dr. Cabibihan's current affiliation is the National University of Singapore.