
Miguel Bugalho

Researcher at INESC-ID

Publications: 18
Citations: 1013

Miguel Bugalho is an academic researcher at INESC-ID. His research focuses on audio signal processing and audio mining. He has an h-index of 10 and has co-authored 18 publications receiving 895 citations. His previous affiliations include Instituto Superior Técnico and the Technical University of Lisbon.

Papers
Journal Article (DOI)

Global optimal eBURST analysis of multilocus typing data using a graphic matroid approach

TL;DR: goeBURST is a globally optimized implementation of the eBURST algorithm that identifies alternative patterns of descent for several bacterial species; it can be applied to any multilocus typing data based on the number of differences between numeric profiles.
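The graphic-matroid formulation behind goeBURST amounts to a greedy (Kruskal-style) construction of a minimum spanning tree over the pairwise differences between allelic profiles. The following is a minimal sketch of that idea with toy profiles; the real algorithm additionally applies tiebreak rules (e.g. counts of single-locus variants) when edge weights are equal, which are omitted here.

```python
# Sketch of the Kruskal / graphic-matroid idea behind goeBURST.
# Profiles and tiebreak handling are simplified assumptions, not the
# published implementation.

def hamming(p, q):
    """Number of loci at which two allelic profiles differ."""
    return sum(a != b for a, b in zip(p, q))

def kruskal_mst(profiles):
    """Greedy minimum spanning tree over pairwise profile distances."""
    n = len(profiles)
    edges = sorted(
        (hamming(profiles[i], profiles[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # edge keeps the graph acyclic, i.e. the
            parent[ri] = rj   # edge set stays independent in the graphic matroid
            tree.append((i, j, w))
    return tree

# Hypothetical 7-locus sequence types for illustration only.
profiles = [
    (1, 1, 1, 1, 1, 1, 1),  # ST1
    (1, 1, 1, 1, 1, 1, 2),  # ST2: single-locus variant of ST1
    (1, 2, 1, 1, 1, 1, 2),  # ST3: single-locus variant of ST2
    (4, 4, 4, 4, 1, 1, 2),  # ST4: distant profile
]
tree = kruskal_mst(profiles)
print(tree)  # three edges linking the four profiles at minimal total distance
```

Because every maximal independent edge set in a graphic matroid has the same size, the greedy procedure always yields a spanning tree; the tiebreak rules are what make goeBURST's result a *globally* optimal choice among the many minimum-weight trees.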
Journal Article (DOI)

Temporal Video Segmentation to Scenes Using High-Level Audiovisual Features

TL;DR: Demonstrates improved performance of the proposed approach compared to other unimodal and multimodal techniques in the relevant literature, and highlights the contribution of high-level audiovisual features to improved segmentation of videos into scenes.
Proceedings Article (DOI)

Non-speech audio event detection

TL;DR: Describes experiments with SVM- and HMM-based classifiers on a 290-hour corpus of sound effects, and reports promising results despite the difficulties posed by the mixtures of audio events that characterize real sounds.

The MediaMill TRECVID 2009 Semantic Video Search Engine

TL;DR: The MediaMill system combines multiple color descriptors, codebooks with soft-assignment, and kernel-based supervised learning to improve the performance of its bag-of-words system.
Proceedings Article

Detecting Audio Events for Semantic Video Search

TL;DR: Reports experiments with SVM classifiers and different features on a 290-hour corpus of sound effects, which allowed the authors to build detectors for almost 50 semantic concepts; the results show that the task is much harder in real-life videos, which often include overlapping audio events.