Author

Tekin Meriçli

Bio: Tekin Meriçli is an academic researcher from Boğaziçi University. The author has contributed to research on topics including robots and mobile robots. The author has an h-index of 9 and has co-authored 28 publications receiving 267 citations. Previous affiliations of Tekin Meriçli include the University of Texas at Austin and Carnegie Mellon University.

Papers
Journal ArticleDOI
TL;DR: This work presents an experience-based push-manipulation approach that enables a robot to acquire experimental models of how pushable real-world objects with complex 3D structures move in response to various pushing actions, and demonstrates the superiority of the achievable push-plan construction and execution concept through safe and successful push-manipulation of a variety of passively mobile pushable objects.
Abstract: In a realistic mobile push-manipulation scenario, it becomes non-trivial and infeasible to build analytical models that will capture the complexity of the interactions between the environment, each of the objects, and the robot as the variety of objects to be manipulated increases. We present an experience-based push-manipulation approach that enables the robot to acquire experimental models regarding how pushable real world objects with complex 3D structures move in response to various pushing actions. These experimentally acquired models can then be used either (1) for trying to track a collision-free guideline path generated for the object by reiterating pushing actions that result in the best locally-matching object trajectories until the goal is reached, or (2) as building blocks for constructing achievable push plans via a Rapidly-exploring Random Trees variant planning algorithm we contribute and executing them by reiterating the corresponding trajectories. We extensively experiment with these two methods in a 3D simulation environment and demonstrate the superiority of the achievable planning and execution concept through safe and successful push-manipulation of a variety of passively mobile pushable objects. Additionally, our preliminary tests in a real world scenario, where the robot is asked to arrange a set of chairs around a table through achievable push-manipulation, also show promising results despite the increased perception and action uncertainty, and verify the validity of our contributed method.
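The experience-based idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' code) of the core selection step: given a bank of previously observed pairs of pushing action and resulting object displacement, pick the action whose remembered effect best matches the next step along the guideline path. The `PushExperience` class, the planar displacement encoding, and the nearest-neighbor criterion are all illustrative assumptions.

```python
"""Hypothetical sketch of experience-based push selection.

Assumes each stored experience pairs a pushing action with the object
displacement it produced; the real system models full 3D trajectories.
"""
import numpy as np

class PushExperience:
    def __init__(self, action_id, displacement):
        self.action_id = action_id                     # which canned push was executed
        self.displacement = np.asarray(displacement)   # observed (dx, dy, dtheta)

def select_best_push(experiences, desired_step):
    """Return the experience whose recorded displacement best matches
    the next step along the collision-free guideline path."""
    desired = np.asarray(desired_step)
    errors = [np.linalg.norm(e.displacement - desired) for e in experiences]
    return experiences[int(np.argmin(errors))]

# Toy usage: two remembered pushes, pick the one closer to the desired motion.
bank = [PushExperience("push_left", [0.05, 0.20, 0.1]),
        PushExperience("push_forward", [0.30, 0.02, 0.0])]
best = select_best_push(bank, desired_step=[0.25, 0.0, 0.0])
print(best.action_id)  # -> "push_forward"
```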

62 citations

Journal ArticleDOI
TL;DR: Cross-subject evaluations, as well as experiments under adverse conditions, show that the proposed image-based method for establishing joint attention between an experimenter and a robot generalizes well and achieves rapid gaze estimation for establishing joint attention.
Abstract: Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention.
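As a rough illustration of the regression-based interpolation described above, the following sketch fits a Gaussian process that maps head pose to a gaze target. The synthetic calibration data, the RBF kernel, and the two-angle head-pose encoding are assumptions rather than the paper's actual setup.

```python
"""Hypothetical sketch: interpolating gaze direction from head pose with
Gaussian process regression, in the spirit of the paper (exact features,
kernel, and data are assumptions)."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic calibration data: head pose (yaw, pitch) -> gaze target (x, y) on a plane.
rng = np.random.default_rng(0)
head_pose = rng.uniform(-30, 30, size=(50, 2))                  # degrees
gaze_target = 0.02 * head_pose + rng.normal(0, 0.01, (50, 2))   # toy relation + noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=1e-3)
gp.fit(head_pose, gaze_target)

# Predict where the experimenter is looking from a new head pose.
pred, std = gp.predict(np.array([[12.0, -5.0]]), return_std=True)
print(pred, std)
```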

54 citations

Proceedings ArticleDOI
23 Oct 2009
TL;DR: This paper describes the evolution of cognitive skills in infants and the adaptation of cognitive development patterns to robotic design, for robotic agents that learn through natural interfaces and follow a developmental trajectory not unlike that of infants.
Abstract: This paper elaborates on mechanisms for establishing visual joint attention in the design of robotic agents that learn through natural interfaces, following a developmental trajectory not unlike that of infants. We first describe the evolution of cognitive skills in infants and then the adaptation of cognitive development patterns to robotic design. A comprehensive outlook on cognitively inspired robotic design schemes pertaining to joint attention over the last decade is presented, with particular emphasis on practical implementation issues. A novel cognitively inspired joint attention fixation mechanism is defined for robotic agents.

22 citations

Proceedings Article
01 Jan 2012
TL;DR: This paper contributes the algorithms and results of the successful deployment of a service mobile robot agent, CoBot, in the authors' multi-floor office environment, and presents the details of this challenging deployment, in particular the effective real-time depth-camera-based localization and navigation algorithms, the symbiotic human-robot interaction approach, and the multi-task dynamic planning and scheduling algorithm.
Abstract: Although there has been a rich variety of mobile robots since the days of the Shakey robot, there are still no general autonomous, unsupervised mobile robots servicing users in our buildings. In this paper, we contribute the algorithms and results of our successful deployment of a service mobile robot agent, CoBot, in our multi-floor office environment. CoBot accepts requests from users, autonomously navigates between floors of the building, and asks for help when needed in a symbiotic relationship with the humans in its environment. We present the details of this challenging deployment, in particular the effective real-time depth-camera-based localization and navigation algorithms, the symbiotic human-robot interaction approach, and the multi-task dynamic planning and scheduling algorithm. We conclude with a comprehensive analysis of the extensive results of the last two weeks of daily CoBot runs, totaling more than 8.7 km while performing a large and varied set of user requests.
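The symbiotic-autonomy idea, where the robot proactively asks nearby humans for help with actions it cannot perform, can be sketched in a few lines. The queue structure, class names, and the elevator-button example below are illustrative assumptions, not the deployed CoBot software.

```python
"""Hypothetical sketch of CoBot-style symbiotic autonomy: the robot executes
queued user requests and explicitly asks a nearby human for help with
actions it cannot perform itself (e.g., pressing an elevator button).
Class and method names are illustrative, not the deployed system's API."""
import heapq

class TaskScheduler:
    def __init__(self):
        self._queue = []          # (priority, arrival order, task) min-heap
        self._order = 0

    def add(self, task, priority=1):
        heapq.heappush(self._queue, (priority, self._order, task))
        self._order += 1

    def next_task(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

def execute(task, can_do_autonomously):
    if can_do_autonomously(task):
        print(f"Executing '{task}' autonomously")
    else:
        # Symbiotic step: recruit a human for the capability the robot lacks.
        print(f"Asking a nearby person for help with '{task}'")

scheduler = TaskScheduler()
scheduler.add("deliver message to office 7412", priority=2)
scheduler.add("press elevator button", priority=1)
while (t := scheduler.next_task()) is not None:
    execute(t, can_do_autonomously=lambda task: "elevator" not in task)
```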

17 citations

Proceedings ArticleDOI
29 Oct 2007
TL;DR: The proposed model was able to mimic and predict the dynamic behavior of the HH simulator under novel stimulation conditions; it can be used to extract the dynamics (in vivo or in vitro) of a neuron without any prior knowledge of its physiology.
Abstract: A single biological neuron is able to perform complex computations that are highly nonlinear in nature, adaptive, and superior to the perceptron model. A neuron is essentially a nonlinear dynamical system. Its state depends on the interactions among its previous states, its intrinsic properties, and the synaptic input it receives. These factors are included in the Hodgkin-Huxley (HH) model, which describes the ionic mechanisms involved in the generation of an action potential. This paper proposes training an artificial neural network to identify and model the physiological properties of a biological neuron and mimic its input-output mapping. An HH simulator was implemented to generate the training data. The proposed model was able to mimic and predict the dynamic behavior of the HH simulator under novel stimulation conditions; hence, it can be used to extract the dynamics (in vivo or in vitro) of a neuron without any prior knowledge of its physiology. Such a model can in turn be used as a tool for controlling a neuron in order to study its dynamics for further analysis.
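A minimal sketch of the simulate-then-learn pipeline described above is given below, assuming an Euler-integrated classic HH model and a small multilayer perceptron that maps a short window of injected current plus the previous voltage to the next voltage. The window length, stimulus, and network size are assumptions, not the paper's configuration.

```python
"""Hypothetical sketch: generate membrane-voltage traces from a standard
Hodgkin-Huxley model, then fit a small neural network that predicts the
next voltage from recent input current and the previous voltage."""
import numpy as np
from sklearn.neural_network import MLPRegressor

def hh_simulate(I_inj, dt=0.01):
    """Euler integration of the classic Hodgkin-Huxley squid-axon model."""
    gNa, gK, gL = 120.0, 36.0, 0.3
    ENa, EK, EL, Cm = 50.0, -77.0, -54.387, 1.0
    V, m, h, n = -65.0, 0.053, 0.596, 0.317
    trace = np.empty(len(I_inj))
    for t, I in enumerate(I_inj):
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (I - I_ion) / Cm
        trace[t] = V
    return trace

# Training data: step-current stimulus and the resulting voltage trace.
I = np.where(np.arange(20000) % 8000 < 4000, 10.0, 0.0)   # uA/cm^2, dt = 0.01 ms
V = hh_simulate(I)

# Features: last 20 current samples + previous voltage -> next voltage.
W = 20
X = np.column_stack([np.stack([I[i - W:i] for i in range(W, len(I) - 1)]),
                     V[W - 1:len(I) - 2]])
y = V[W:len(I) - 1]
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=200).fit(X, y)
print("training R^2:", model.score(X, y))
```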

17 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
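The mail-filtering example in the fourth category can be made concrete with a few lines of code: a classifier learns a user's filtering rules from messages the user has already kept or rejected. The tiny hand-written dataset and the naive Bayes model below are purely illustrative assumptions.

```python
"""Minimal sketch of a learned mail filter: the model infers the user's
filtering rules from previously kept/rejected messages (toy data)."""
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting moved to 3pm",
            "cheap loans approved instantly", "lunch tomorrow?"]
rejected = [1, 0, 1, 0]   # 1 = the user deleted it, 0 = the user kept it

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, rejected)
print(filter_model.predict(["free loans, win now"]))   # likely [1]
```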

13,246 citations

Proceedings Article
05 Dec 2016
TL;DR: In this paper, the authors investigate an experiential learning paradigm for acquiring an internal model of intuitive physics, by jointly estimating forward and inverse models of dynamics, which can then be used for multi-step decision making.
Abstract: We investigate an experiential learning paradigm for acquiring an internal model of intuitive physics. Our model is evaluated on a real-world robotic manipulation task that requires displacing objects to target locations by poking. The robot gathered over 400 hours of experience by executing more than 100K pokes on different objects. We propose a novel approach based on deep neural networks for modeling the dynamics of the robot's interactions directly from images, by jointly estimating forward and inverse models of dynamics. The inverse model objective provides supervision to construct informative visual features, which the forward model can then predict, in turn regularizing the feature space for the inverse model. The interplay between these two objectives creates useful, accurate models that can then be used for multi-step decision making. This formulation has the additional benefit that it is possible to learn forward models in an abstract feature space and thus alleviate the need to predict pixels. Our experiments show that this joint modeling approach outperforms alternative methods.
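A rough sketch of the joint forward/inverse formulation is given below, assuming a shared image encoder, an inverse network that predicts the action from consecutive features, and a forward network that predicts the next feature. The architecture sizes, continuous action encoding, and loss weighting are assumptions, not the paper's exact design.

```python
"""Hypothetical PyTorch sketch of the joint forward/inverse idea: a shared
encoder maps images to features; the inverse model predicts the poke action
from consecutive features, while the forward model predicts the next feature."""
import torch
import torch.nn as nn

class PokeModel(nn.Module):
    def __init__(self, feat_dim=64, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(                     # image -> feature
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.inverse = nn.Sequential(                     # (f_t, f_t1) -> action
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, action_dim))
        self.forward_model = nn.Sequential(               # (f_t, action) -> f_t1
            nn.Linear(feat_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, img_t, img_t1, action):
        f_t, f_t1 = self.encoder(img_t), self.encoder(img_t1)
        pred_action = self.inverse(torch.cat([f_t, f_t1], dim=1))
        pred_f_t1 = self.forward_model(torch.cat([f_t, action], dim=1))
        return pred_action, pred_f_t1, f_t1

model = PokeModel()
img_t, img_t1 = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
action = torch.rand(8, 4)
pred_action, pred_f_t1, f_t1 = model(img_t, img_t1, action)
# Inverse loss supervises the features; forward loss regularizes them.
loss = nn.functional.mse_loss(pred_action, action) + \
       0.1 * nn.functional.mse_loss(pred_f_t1, f_t1.detach())
loss.backward()
```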

253 citations

Journal ArticleDOI
TL;DR: Given their importance both for early development and for building autonomous robots with humanlike abilities, imitation, joint attention, and interactive engagement are key issues in the development of assistive robotics for autism and must be the focus of further research.
Abstract: Recently, there have been considerable advances in the research on innovative information communication technology (ICT) for the education of people with autism. This review focuses on two aims: (1) to provide an overview of the recent ICT applications used in the treatment of autism and (2) to focus on the early development of imitation and joint attention in the context of children with autism as well as robotics. There have been a variety of recent ICT applications in autism, which include the use of interactive environments implemented in computers and special input devices, virtual environments, avatars and serious games as well as telerehabilitation. Despite exciting preliminary results, the use of ICT remains limited. Many of the existing ICTs have limited capabilities and performance in actual interactive conditions. Clinically, most ICT proposals have not been validated beyond proof of concept studies. Robotics systems, developed as interactive devices for children with autism, have been used to assess the child’s response to robot behaviors; to elicit behaviors that are promoted in the child; to model, teach and practice a skill; and to provide feedback on performance in specific environments (e.g., therapeutic sessions). Based on their importance for both early development and for building autonomous robots that have humanlike abilities, imitation, joint attention and interactive engagement are key issues in the development of assistive robotics for autism and must be the focus of further research.

243 citations

01 Jan 1954
TL;DR: The author's style is worthy of his subject; it is always clear and assured; but it also scintillates with many a beautifully turned phrase, or allusion, which explain or underline, but never distort, his scientific statements.
Abstract: It is not possible in the space of a brief notice here to do any sort of justice to this remarkable book. Its author's name is, of course, enough to indicate that it deals with electroencephalography, but no one should suppose that this is all it does. Dr. Walter discusses the information this new technique has made available on the process of learning, and in his phrase on "intimations of personality." This is, of course, a long trek from the view, still all too common, that E.E.G. can be used to detect epileptics and a few psychopaths. The whole mechanism of the brain has been opened to fresh study; and the first fruits are being presented here. The author's style is worthy of his subject; it is always clear and assured; but it also scintillates with many a beautifully turned phrase, or allusion, which explain or underline, but never distort, his scientific statements; even doggerel has its place in clarification. Many will find his last chapter most stimulating of all: "The brain tomorrow." We see to our horror the pace of life increasing,

200 citations

Journal ArticleDOI
TL;DR: A novel video captioning framework, which integrates bidirectional long short-term memory (BiLSTM) and a soft attention mechanism to generate better global representations for videos as well as enhance the recognition of lasting motions in videos.
Abstract: Video captioning has been attracting broad research attention in the multimedia community. However, most existing approaches heavily rely on static visual information or only partially capture local temporal knowledge (e.g., within 16 frames), and thus can hardly describe motions accurately from a global view. In this paper, we propose a novel video captioning framework, which integrates bidirectional long short-term memory (BiLSTM) and a soft attention mechanism to generate better global representations for videos as well as enhance the recognition of lasting motions in videos. To generate video captions, we exploit another long short-term memory network as a decoder to fully explore global contextual information. The benefits of our proposed method are twofold: 1) the BiLSTM structure comprehensively preserves global temporal and visual information and 2) the soft attention mechanism enables the language decoder to recognize and focus on principal targets in the complex content. We verify the effectiveness of our proposed video captioning framework on two widely used benchmarks, namely the Microsoft Video Description corpus and MSR-Video to Text, and the experimental results demonstrate the superiority of the proposed approach compared to several state-of-the-art methods.
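A compact sketch of the described encoder/decoder structure is shown below, assuming per-frame CNN features as input, a bidirectional LSTM encoder, and a soft-attention LSTM decoder that produces one word per step. The dimensions, vocabulary size, and the single decoding step shown are assumptions rather than the paper's exact configuration.

```python
"""Hypothetical PyTorch sketch of a BiLSTM encoder with a soft-attention
LSTM decoder for video captioning (sizes and setup are assumptions)."""
import torch
import torch.nn as nn

class BiLSTMAttnCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab=10000):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden + hidden, 1)     # scores each frame
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTMCell(hidden + 2 * hidden, hidden)
        self.out = nn.Linear(hidden, vocab)

    def step(self, frame_feats, prev_word, state):
        # Re-encoding every step keeps the sketch short; cache this in practice.
        enc, _ = self.encoder(frame_feats)                # (B, T, 2H) global context
        h, c = state
        # Soft attention: weight frames by relevance to the current decoder state.
        scores = self.attn(torch.cat(
            [enc, h.unsqueeze(1).expand(-1, enc.size(1), -1)], dim=-1))
        weights = torch.softmax(scores, dim=1)            # (B, T, 1)
        context = (weights * enc).sum(dim=1)              # (B, 2H)
        h, c = self.decoder(torch.cat([self.embed(prev_word), context], dim=-1), (h, c))
        return self.out(h), (h, c)

model = BiLSTMAttnCaptioner()
feats = torch.rand(2, 20, 2048)                           # 20 frames of CNN features
h = c = torch.zeros(2, 512)
logits, state = model.step(feats, torch.tensor([1, 1]), (h, c))
print(logits.shape)                                       # (2, 10000) next-word scores
```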

200 citations