Author

Alexandre Bernardino

Bio: Alexandre Bernardino is an academic researcher from Instituto Superior Técnico. The author has contributed to research in topics: Humanoid robot & Robot. The author has an h-index of 32 and has co-authored 280 publications receiving 4,855 citations. Previous affiliations of Alexandre Bernardino include Sant'Anna School of Advanced Studies & University of Lisbon.
Topics: Humanoid robot, Robot, iCub, Pose, Affordance


Papers
Journal ArticleDOI
TL;DR: Describes the iCub, a humanoid platform designed to support collaborative research in cognitive development through autonomous exploration and social interaction, which has attracted a growing community of users and developers.

549 citations

Journal ArticleDOI
TL;DR: This work presents a general model for learning object affordances using Bayesian networks, integrated within a general developmental architecture for social robots, and demonstrates successful learning in the real world by having a humanoid robot interact with objects.
Abstract: Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step toward understanding world properties and developing social skills. We present a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects. We illustrate the benefits of the acquired knowledge in imitation games.
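As a rough illustration of the idea (not the authors' implementation), the sketch below treats affordances as a discrete Bayesian network over action, object and effect variables, estimates P(effect | action, object) from interaction episodes by maximum likelihood, and reuses it both for prediction and for choosing an action in an imitation-style task. The variable names and toy data are invented for the example.

```python
# Minimal affordance-learning sketch: a discrete network Action -> Effect <- Object,
# with the conditional table P(effect | action, object) counted from episodes.
from collections import Counter, defaultdict

# Each episode: (action, object_shape, observed_effect) -- illustrative toy data.
episodes = [
    ("tap",   "sphere", "rolls"),
    ("tap",   "sphere", "rolls"),
    ("tap",   "cube",   "slides"),
    ("grasp", "sphere", "lifted"),
    ("grasp", "cube",   "lifted"),
    ("touch", "cube",   "still"),
]

# Maximum-likelihood estimate of P(effect | action, object) via counts.
counts = defaultdict(Counter)
for action, obj, effect in episodes:
    counts[(action, obj)][effect] += 1

def predict_effect(action, obj):
    """Distribution over effects for an (action, object) pair."""
    c = counts[(action, obj)]
    total = sum(c.values())
    return {e: n / total for e, n in c.items()} if total else {}

def choose_action(obj, desired_effect):
    """Imitation-style use: pick the action most likely to reproduce the effect."""
    actions = {a for a, o in counts if o == obj}
    return max(actions, key=lambda a: predict_effect(a, obj).get(desired_effect, 0.0))

print(predict_effect("tap", "sphere"))   # {'rolls': 1.0}
print(choose_action("cube", "lifted"))   # 'grasp'
```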

385 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: A unified approach to bilinear factorization and nuclear norm regularization that inherits the benefits of both, together with a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.
Abstract: Low rank models have been widely used for the representation of shape, appearance or motion in computer vision problems. Traditional approaches to fit low rank models make use of an explicit bilinear factorization. These approaches benefit from fast numerical methods for optimization and easy kernelization. However, they suffer from serious local minima problems depending on the loss function and the amount/type of missing data. Recently, these low-rank models have alternatively been formulated as convex problems using the nuclear norm regularizer; unlike factorization methods, however, their numerical solvers are slow, and it is unclear how to kernelize them or to impose a rank a priori. This paper proposes a unified approach to bilinear factorization and nuclear norm regularization that inherits the benefits of both. We analyze the conditions under which these approaches are equivalent. Moreover, based on this analysis, we propose a new optimization algorithm and a "rank continuation" strategy that outperform state-of-the-art approaches for Robust PCA, Structure from Motion and Photometric Stereo with outliers and missing data.
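As background, the convex side of the problem can be illustrated in a few lines of NumPy: nuclear-norm regularized matrix completion solved by iterated singular value thresholding (a soft-impute-style scheme). This is only a sketch of the baseline the paper unifies with bilinear factorization, not the proposed algorithm or its rank-continuation strategy; the synthetic data and parameter values are assumptions.

```python
# Nuclear-norm matrix completion sketch via singular value thresholding (SVT).
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=1.0, iters=200):
    """Fill missing entries of M (mask = 1 where observed), soft-impute style."""
    X = np.zeros_like(M)
    for _ in range(iters):
        X = svt(mask * M + (1 - mask) * X, tau)
    return X

rng = np.random.default_rng(0)
L = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))  # rank-3 ground truth
mask = (rng.random(L.shape) < 0.6).astype(float)                 # ~60% observed
L_hat = complete(L, mask)
# Relative error on the unobserved entries.
print(np.linalg.norm((L_hat - L) * (1 - mask)) / np.linalg.norm(L * (1 - mask)))
```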

204 citations

Proceedings ArticleDOI
15 May 2006
TL;DR: This paper describes the design of a robot head developed in the framework of the RobotCub project, whose iCub platform is, in terms of kinematic complexity, the most complete humanoid robot currently being designed.
Abstract: This paper describes the design of a robot head developed in the framework of the RobotCub project. The goal of this project is the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform will be approximately 90 cm tall, weigh 23 kg, and have a total of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed in terms of kinematic complexity. Unlike similarly sized humanoid platforms, the eyes can also move. Specifications are based on biological anatomical and behavioral data, as well as task constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design.

193 citations

Proceedings Article
12 Dec 2011
TL;DR: This paper formulates image categorization as a multi-label classification problem using recent advances in matrix completion, and proposes two convex algorithms for matrix completion, based on a rank minimization criterion specifically tailored to visual data, whose convergence properties are proved.
Abstract: Recently, image categorization has been an active research topic due to the urgent need to retrieve and browse digital images via semantic keywords. This paper formulates image categorization as a multi-label classification problem using recent advances in matrix completion. Under this setting, classification of testing data is posed as a problem of completing unknown label entries in a data matrix that concatenates training and testing features with training labels. We propose two convex algorithms for matrix completion based on a rank minimization criterion specifically tailored to visual data, and prove their convergence properties. A major advantage of our approach w.r.t. standard discriminative classification methods for image categorization is its robustness to outliers, background noise and partial occlusions, both in the feature and label space. Experimental validation on several datasets shows how our method outperforms state-of-the-art algorithms while effectively capturing semantic concepts of classes.
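The sketch below illustrates the overall formulation only (not the paper's convex algorithms): training and testing feature columns are stacked with label rows, the unknown test labels are treated as missing entries, and a simple rank-truncated completion heuristic fills them in. The synthetic data, the hard-impute-style solver and all parameter values are assumptions made for the example.

```python
# Multi-label classification posed as matrix completion (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tr, n_te, r = 20, 3, 80, 20, 3

# Synthetic data: features and real-valued label scores share a rank-r latent factor.
Z = rng.standard_normal((n_tr + n_te, r))
X = Z @ rng.standard_normal((r, d))       # features, one row per image
S = Z @ rng.standard_normal((r, k))       # label scores; sign(S) gives +/-1 labels
Y = np.sign(S)

# Data matrix: [label scores; features], one column per image.
M = np.vstack([S.T, X.T])
mask = np.ones_like(M)
mask[:k, n_tr:] = 0.0                     # test labels are the unknown entries

def hard_impute(M, mask, r, iters=200):
    """Alternate a rank-r truncated SVD with re-imposing the observed entries."""
    Xc = mask * M
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        low_rank = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
        Xc = mask * M + (1 - mask) * low_rank
    return Xc

M_hat = hard_impute(M, mask, r)
pred = np.sign(M_hat[:k, n_tr:]).T        # recovered test labels
print("test label accuracy:", (pred == Y[n_tr:]).mean())
```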

189 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer deep network, the quality of which is assessed in the context of classification and detection.
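A minimal sketch of the building block the abstract refers to: one Inception module in PyTorch, with parallel 1x1, 3x3 and 5x5 convolutional branches plus a pooled branch, each kept cheap by 1x1 reduction convolutions and concatenated along the channel axis. The branch widths below follow the paper's inception (3a) stage, but the module as written is an illustration, not the released GoogLeNet code.

```python
# One Inception module: multi-scale branches concatenated along channels.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # All branches preserve spatial size, so they concatenate cleanly.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example with inception (3a)-style widths: 64 + 128 + 32 + 32 = 256 output channels.
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```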

40,257 citations

Posted Content
TL;DR: In this book, Andrew Harvey provides a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter, for modelling economic and social time series and addressing the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result, the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
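As an illustration of the machinery the book builds on, the sketch below runs the Kalman filter on the simplest structural time series model, the local level model y_t = mu_t + eps_t with mu_t = mu_{t-1} + eta_t. The noise variances and the synthetic series are assumptions for the example; in practice the variances are estimated by maximum likelihood.

```python
# Kalman filter for the local level (random-walk trend) structural model.
import numpy as np

def local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.1, mu0=0.0, P0=1e6):
    """Return filtered estimates of the unobserved level mu_t and their variances."""
    mu, P = mu0, P0
    mus, Ps = [], []
    for obs in y:
        # Prediction step: the level follows a random walk.
        mu_pred, P_pred = mu, P + sigma_eta2
        # Update step with the new observation.
        F = P_pred + sigma_eps2            # innovation variance
        K = P_pred / F                     # Kalman gain
        mu = mu_pred + K * (obs - mu_pred)
        P = (1 - K) * P_pred
        mus.append(mu)
        Ps.append(P)
    return np.array(mus), np.array(Ps)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.3, 100))   # true random-walk trend
y = level + rng.normal(0, 1.0, 100)          # noisy observations
mu_hat, _ = local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.09)
print("RMSE of filtered level:", np.sqrt(np.mean((mu_hat - level) ** 2)))
```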

4,252 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

01 Jan 2006

3,012 citations

01 Nov 2008

2,686 citations