
Andrea Di Blas

Researcher at Business International Corporation

Publications -  14
Citations -  429

Andrea Di Blas is an academic researcher from Business International Corporation. The author has contributed to research topics including SIMD and Set (abstract data type). The author has an h-index of 6 and co-authored 14 publications receiving 404 citations. Previous affiliations of Andrea Di Blas include University of California, Santa Cruz and Oracle Corporation.

Papers
Journal Article

Sort vs. Hash revisited: fast join implementation on modern multi-core CPUs

TL;DR: This paper re-examines two popular join algorithms to determine whether recent computer architecture trends shift the balance that has favored hash join for many years, and presents multicore implementations of hash join and sort-merge join that consistently outperform all previously reported results.
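
As a rough illustration of the two algorithms the paper compares, the sketch below shows a minimal single-threaded hash join and sort-merge join in Python. It is not the paper's multicore, cache-optimized implementation, and the relation format (lists of (key, payload) tuples) is an assumption made for illustration.

```python
# Illustrative single-threaded sketches of the two join algorithms compared in
# the paper; the multicore versions studied there are far more involved.
# Relations are assumed to be lists of (key, payload) tuples.

def hash_join(r, s):
    """Build a hash table on r, then probe it with every tuple of s."""
    table = {}
    for key, payload in r:
        table.setdefault(key, []).append(payload)
    return [(key, rp, sp)
            for key, sp in s
            for rp in table.get(key, [])]

def sort_merge_join(r, s):
    """Sort both inputs on the join key, then merge them with two cursors."""
    r = sorted(r, key=lambda t: t[0])
    s = sorted(s, key=lambda t: t[0])
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1
        elif r[i][0] > s[j][0]:
            j += 1
        else:
            key = r[i][0]
            # Emit the cross product of the two runs of equal keys.
            i0 = i
            while i < len(r) and r[i][0] == key:
                i += 1
            j0 = j
            while j < len(s) and s[j][0] == key:
                j += 1
            out.extend((key, rp, sp)
                       for _, rp in r[i0:i]
                       for _, sp in s[j0:j])
    return out
```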
Proceedings Article

Parallel search on video cards

TL;DR: This paper presents P-ary search, a novel parallel search algorithm for large-scale database index operations that scales with the number of processors and outperforms traditional thread-level parallel GPU and CPU implementations.
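
A serial Python simulation of the general P-ary search idea is sketched below. The splitting factor p, the list-based input, and the plain loop standing in for the p simultaneous SIMD comparisons are assumptions for illustration; this is not the paper's GPU code.

```python
# Serial simulation of the P-ary search idea: each iteration splits the
# current range into p segments and keeps the one that can contain the key,
# so the range shrinks by roughly a factor of p per step. On a GPU the p
# boundary comparisons would run as p SIMD threads; here they are a loop.

def p_ary_search(data, key, p=4):
    lo, hi = 0, len(data)              # current search window [lo, hi)
    while hi - lo > 1:
        step = max(1, (hi - lo) // p)
        new_lo, new_hi = lo, hi
        # Compare the key against the internal segment boundaries.
        for b in range(lo + step, hi, step):
            if data[b] <= key:
                new_lo = b             # key lies at or after this boundary
            else:
                new_hi = b             # key lies before this boundary
                break
        lo, hi = new_lo, new_hi
    return lo if lo < len(data) and data[lo] == key else -1
```

For example, `p_ary_search([1, 3, 5, 7, 9, 11], 7)` returns index 3 after a single round of boundary comparisons.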
Journal Article

Optimizing neural networks on SIMD parallel computers

TL;DR: This work studies two SIMD parallel implementations of multiple-restart Hopfield networks for solving the maximum clique problem and finds that the neural networks map well to the parallel architectures, affording substantial speedups over the serial program.
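
The sketch below is a minimal serial illustration of a multiple-restart, Hopfield-style heuristic for maximum clique, in the spirit of the approach described. The adjacency-set graph format, the penalty weight, and the restart count are assumptions, and the paper's SIMD mapping is not reproduced here.

```python
# Illustrative serial sketch of a multiple-restart discrete Hopfield-style
# heuristic for maximum clique. One neuron per vertex; a neuron stays active
# only if it has no active non-neighbor, so stable states are maximal cliques.
import random

def hopfield_max_clique(adj, restarts=20, penalty=2.0, seed=0):
    """adj: dict mapping each vertex to the set of its neighbors."""
    rng = random.Random(seed)
    vertices = list(adj)
    best = set()
    for _ in range(restarts):
        # Random initial state for this restart.
        state = {v: rng.random() < 0.5 for v in vertices}
        changed = True
        while changed:
            changed = False
            order = vertices[:]
            rng.shuffle(order)
            for v in order:
                # Net input: unit reward for being active, minus a penalty
                # per active non-neighbor (penalty=2 ensures convergence).
                conflicts = sum(1 for u in vertices
                                if u != v and state[u] and u not in adj[v])
                new = (1.0 - penalty * conflicts) > 0
                if new != state[v]:
                    state[v] = new
                    changed = True
        clique = {v for v in vertices if state[v]}
        if len(clique) > len(best):
            best = clique
    return best
```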
Journal Article

Alkaline hemolysis fragility is dependent on cell shape: results from a morphology tracker.

TL;DR: This work presents a novel method for the analysis of morphologic changes of live erythrocytes as a function of time and uses this method to extract information on alkaline hemolysis fragility.
Book Chapter

Large-Scale GPU Search

TL;DR: This chapter introduces P-ary search, a scalable parallel search algorithm for single-instruction multiple-data (SIMD) architectures such as graphics processing units (GPUs), and demonstrates how parallel memory access combined with the superior synchronization capabilities of SIMD architectures can be leveraged to compensate for memory latency.