
Amir Feder

Researcher at Technion – Israel Institute of Technology

Publications: 33
Citations: 614

Amir Feder is an academic researcher at Technion – Israel Institute of Technology. His research focuses on topics including computer science and language models. He has an h-index of 8 and has co-authored 20 publications receiving 247 citations. His previous affiliations include Google and Tel Aviv University.

Papers
Posted Content

Explaining Classifiers with Causal Concept Effect (CaCE)

TL;DR: This work defines the Causal Concept Effect (CaCE) as the causal effect of a human-interpretable concept on a deep neural net's predictions, and shows that the CaCE measure can avoid errors stemming from confounding.
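
For a binary concept, this effect has a one-line interventional definition. The display below is a hedged paraphrase in do-notation (f denotes the network's prediction and C the concept), not necessarily the paper's exact formulation:

    \mathrm{CaCE}(C) \;=\; \mathbb{E}\big[f(X) \mid do(C=1)\big] \;-\; \mathbb{E}\big[f(X) \mid do(C=0)\big]

The do-operator marks an intervention on the concept; this is what separates CaCE from the purely observational difference, which confounders can distort.
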
Journal Article

Shared computational principles for language processing in humans and deep language models

TL;DR: This article showed that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; and (3) both rely on contextual embeddings to represent words in natural contexts.
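
As a minimal sketch of principle (2), pre-onset prediction and post-onset surprise can be quantified as surprisal, -log p(word | context), under an autoregressive language model. The snippet below uses GPT-2 via the Hugging Face transformers library purely for illustration; the model choice and function name are assumptions, not the study's actual pipeline.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def surprisal(context: str, word: str) -> float:
        # -log p(word | context) in nats, summed over the word's subword tokens.
        ids = tokenizer(context, return_tensors="pt").input_ids
        word_ids = tokenizer(" " + word, add_special_tokens=False).input_ids
        total = 0.0
        with torch.no_grad():
            for tok in word_ids:
                logits = model(ids).logits[0, -1]  # next-token logits
                total -= torch.log_softmax(logits, dim=-1)[tok].item()
                ids = torch.cat([ids, torch.tensor([[tok]])], dim=1)
        return total

    # A highly predictable continuation should yield low surprisal.
    print(surprisal("The cat sat on the", "mat"))
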
Journal Article

CausaLM: Causal Model Explanation Through Counterfactual Language Models

TL;DR: This paper proposes CausaLM, a framework for producing causal model explanations using counterfactual language representation models, built by fine-tuning deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem.
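
A hedged sketch of the adversarial idea follows. This is a generic gradient-reversal setup in PyTorch, not the authors' released code: the encoder is fine-tuned so its representation keeps serving the main task while an auxiliary head is prevented from recovering the treated concept.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; negated, scaled gradient on the backward pass.
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lamb * grad_out, None

    class CounterfactualEncoder(nn.Module):
        # encoder: any module mapping inputs to (batch, hidden) vectors.
        def __init__(self, encoder, hidden, n_labels, n_concepts, lamb=1.0):
            super().__init__()
            self.encoder = encoder
            self.task_head = nn.Linear(hidden, n_labels)       # main task
            self.concept_head = nn.Linear(hidden, n_concepts)  # adversarial task
            self.lamb = lamb

        def forward(self, x):
            h = self.encoder(x)
            # The concept head sees a gradient-reversed view of h, so joint
            # training pushes h to forget the concept while keeping task signal.
            return self.task_head(h), self.concept_head(GradReverse.apply(h, self.lamb))

    # Toy usage; in CausaLM the encoder would be a deep contextualized model.
    toy = CounterfactualEncoder(nn.Sequential(nn.Linear(16, 32), nn.ReLU()),
                                hidden=32, n_labels=2, n_concepts=2)
    task_logits, concept_logits = toy(torch.randn(4, 16))

Roughly, comparing a model's predictions on the original versus the concept-forgetting representation then yields an estimate of the concept's effect, which is the kind of explanation the framework targets.
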
Journal Article

The impact of DDoS and other security shocks on Bitcoin currency exchanges: evidence from Mt. Gox

TL;DR: It is found that following DDoS attacks on Mt. Gox, the number of large trades on the exchange fell sharply; on days after an attack, the distribution of daily trading volume became less skewed (fewer big trades) and showed smaller kurtosis.
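
For readers unfamiliar with those distributional measures, here is a small, self-contained illustration (synthetic data, not the Mt. Gox series) of how skewness and kurtosis of daily volume could be compared across ordinary and post-attack days:

    import numpy as np
    from scipy.stats import kurtosis, skew

    rng = np.random.default_rng(0)
    # Synthetic daily volumes: a heavy right tail mimics a few very large trades.
    ordinary_days = rng.lognormal(mean=10.0, sigma=1.0, size=500)
    post_attack_days = rng.lognormal(mean=10.0, sigma=0.6, size=60)  # thinner tail

    for name, vol in [("ordinary", ordinary_days), ("post-attack", post_attack_days)]:
        print(f"{name}: skew={skew(vol):.2f}, excess kurtosis={kurtosis(vol):.2f}")

Lower skewness and kurtosis after attacks correspond to the paper's finding that big trades became rarer.
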
Posted Content

Thinking ahead: prediction in context as a keystone of language in humans and machines

TL;DR: Empirical evidence is provided that the human brain and autoregressive DLMs share two computational principles: both are engaged in continuous prediction, and both represent words as a function of the previous context; moreover, the DLMs' contextual embeddings capture the neural representation of context-specific word meaning better than arbitrary or static semantic embeddings.
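
To make the contextual-versus-static contrast concrete, the assumed sketch below extracts a context-dependent vector for the same word in two different sentences; a static embedding would return identical vectors in both cases. The model choice and helper function are illustrative, not the study's method.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")
    model.eval()

    def word_vector(sentence: str, word: str) -> torch.Tensor:
        # Contextual embedding: hidden state at the word's last subword position.
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        word_ids = tokenizer(" " + word, add_special_tokens=False).input_ids
        ids = enc.input_ids[0].tolist()
        for i in range(len(ids) - len(word_ids) + 1):  # naive subsequence match
            if ids[i:i + len(word_ids)] == word_ids:
                return hidden[i + len(word_ids) - 1]
        raise ValueError("word not found in sentence")

    a = word_vector("He sat on the river bank.", "bank")
    b = word_vector("She deposited cash at the bank.", "bank")
    print(torch.cosine_similarity(a, b, dim=0).item())  # < 1.0: context matters
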