Xavier Ochoa

Researcher at New York University

Publications: 126
Citations: 2,911

Xavier Ochoa is an academic researcher at New York University. He has contributed to research on the topics of Learning analytics and Learning objects, and has an h-index of 24, having co-authored 121 publications that received 2,523 citations. Previous affiliations of Xavier Ochoa include Escuela Superior Politecnica del Litoral and the Open University of Catalonia.

Papers
Journal ArticleDOI

Context-Aware Recommender Systems for Learning: A Survey and Future Challenges

TL;DR: The authors present a context framework that identifies relevant context dimensions for technology-enhanced learning (TEL) applications and analyze existing TEL recommender systems along these dimensions. Based on their survey results, they outline topics on which further research is needed.

Journal ArticleDOI

Relevance Ranking Metrics for Learning Objects

TL;DR: An exploratory evaluation of the proposed relevance ranking metrics shows that even the simplest ones provide a statistically significant improvement in ranking order over the most common algorithmic relevance metric.
Journal ArticleDOI

Quantitative Analysis of Learning Object Repositories

TL;DR: The main findings are that the number of learning objects is distributed among repositories according to a power law, that repositories mostly grow linearly, and that the number of learning objects published by each contributor follows heavy-tailed distributions.
Journal ArticleDOI

Automatic evaluation of metadata quality in digital repositories

TL;DR: A set of scalable quality metrics for metadata, based on the Bruce & Hillman framework for metadata quality control, is presented. Several metrics, especially Text Information Content, correlate well with human evaluation, and the average of all the metrics is roughly as effective as human reviewers at flagging low-quality instances.