Ramez Elmasri
Researcher at University of Texas at Arlington
Publications - 202
Citations - 10375
Ramez Elmasri is an academic researcher at the University of Texas at Arlington. He has contributed to research on topics including database design and temporal databases. He has an h-index of 36 and has co-authored 201 publications receiving 10157 citations. Previous affiliations of Ramez Elmasri include Honeywell and Stanford University.
Papers
A Classification and Modeling of the Quality of Contextual Information in Smart Spaces.
TL;DR: In this article, the authors proposed a pragmatic context classification and a generalized context modeling scheme based on sensor fusion techniques to improve the quality of given contextual information by reducing uncertainty, and demonstrated an example of the applied scenario as an evidential network.
Proceedings ArticleDOI
Investigation of impact factors for various performances of passive UHF RFID system
TL;DR: According to the empirical results, compared with the non-interfered backscattering signal strength measured in an anechoic chamber, tag-to-tag interference affects the reader's received signal strength, producing for example an excess decrease of 5.8 dB or an increase of 2.5 dB, depending on the distance between the two tags.
Journal ArticleDOI
BusSEngine: a business search engine
Kamal Taha, Ramez Elmasri, +1 more
TL;DR: Two XML search engines are proposed: a keyword-based XML search engine for answering businesses' customers, called BusSEngine-K, and a loosely structured XML search engine for answering businesses' employees, called BusSEngine-L, both built on top of an XQuery search engine.
Journal ArticleDOI
A Survey on Trajectory Data Warehouse
TL;DR: A framework that aims to provide the requirements for building a Trajectory Data Warehouse (TDW) is proposed; the survey also discusses different applications that use the TDW and how those applications utilize it.
Proceedings ArticleDOI
Weakly-supervised hand part segmentation from depth images
TL;DR: In this paper, a data-driven method is proposed for hand part segmentation on depth maps that requires no extra effort to obtain segmentation labels; it uses the major 3D hand joint locations already provided as labels by public datasets to learn to estimate the hand shape and pose given a depth map.