Ali Farhadi
Researcher at University of Washington
Publications - 247
Citations - 87076
Ali Farhadi is an academic researcher from University of Washington. The author has contributed to research in topics: Context (language use) & Question answering. The author has an h-index of 63, has co-authored 234 publications receiving 57227 citations. Previous affiliations of Ali Farhadi include University of Illinois at Urbana–Champaign & Lorestan University of Medical Sciences.
Papers
Journal ArticleDOI
Higher Order Statistics in Computer Vision
Ali Farhadi, Mehrdad Shahshahani +1 more
TL;DR: These methods make use of higher order statistics, transforms of data into the frequency domain, and characteristics of the resulting clusters to classify images and locate extraneous objects within images.
Journal ArticleDOI
Genetically engineered E. coli invade epithelial cells and transfer their genetic cargo into the cells: an approach to a gene delivery system
Maryam Zare, Ali Farhadi, Farahnaz Zare, Gholam Reza Rafiei Dehbidi, Farzaneh Zarghampoor, Mohammad Hossein Ahmadi, Abbas Behzad Behbahani +6 more
Patent
Generating a customized machine-learning model to perform tasks using artificial intelligence
Kirchhoff Alexander James Oscar Craver, Ali Farhadi, Anish Prabhu, Del Mundo Carlo Eduardo Cabanero, Tormoen Daniel Carl, Hessam Bagherinezhad, Weaver Matthew S, Maxwell Horton, Mohammad Rastegari, Karl Jr Robert Stephen, Lebrecht Sophie +10 more
TL;DR: In this article, the authors propose a method for providing, to a client system of a user, a user interface for display, which includes a first set of options for selecting an artificial intelligence (AI) task to integrate into a user application, a second set of options for selecting the devices on which the user wants to deploy the selected AI task, and a third set of options for specifying constraints specific to the selected devices.
Posted Content
Pushing it out of the Way: Interactive Visual Navigation
TL;DR: In this article, the authors introduce the Neural Interaction Engine (NIE), which explicitly predicts the change in the environment caused by the agent's actions. By modeling these changes during planning, agents exhibit significant improvements in their navigational capabilities.
Proceedings Article
Learning Visual Representation from Human Interactions
TL;DR: In this article, a self-supervised representation that encodes interaction and attention cues is proposed; it learns better representations than visual-only approaches and can be applied to a variety of target tasks such as scene classification, action recognition, depth estimation, and walkable surface estimation.