Author

Hichem Barki

Bio: Hichem Barki is an academic researcher from Qatar University. The author has contributed to research in topics: Polyhedron & Minkowski addition. The author has an h-index of 9, has co-authored 25 publications receiving 239 citations. Previous affiliations of Hichem Barki include Qatar Airways & Siemens.

Papers
Journal ArticleDOI
TL;DR: The concept of contributing vertices is exploited to propose the Enhanced and Simplified Slope Diagram-based Minkowski Sum (ESSDMS) algorithm, a slope diagram-based Minkowski sum algorithm sharing some common points with the approach proposed by Wu et al.
Abstract: The Minkowski sum is an important operation used in many domains, such as computer-aided design, robotics, spatial planning, mathematical morphology, and image processing. We propose a novel algorithm, named the Contributing Vertices-based Minkowski Sum (CVMS) algorithm, for the computation of the Minkowski sum of convex polyhedra. The CVMS algorithm makes it easy to obtain all the facets of the Minkowski sum polyhedron by examining only the contributing vertices (a concept we introduce in this work) for each input facet. We exploit the concept of contributing vertices to propose the Enhanced and Simplified Slope Diagram-based Minkowski Sum (ESSDMS) algorithm, a slope diagram-based Minkowski sum algorithm sharing some common points with the approach proposed by Wu et al. [Wu Y, Shah J, Davidson J. Improvements to algorithms for computing the Minkowski sum of 3-polytopes. Comput Aided Des. 2003; 35(13): 1181-92]. The ESSDMS algorithm does not embed input polyhedra on the unit sphere and does not need to perform stereographic projections. Moreover, the use of contributing vertices brings further simplifications and improves the overall performance. The implementations of both algorithms are straightforward, use exact number types, produce exact results, and are based on CGAL, the Computational Geometry Algorithms Library. More examples and results of the CVMS algorithm for several convex polyhedra can be found at http://liris.cnrs.fr/hichem.barki/mksum/CVMS-convex.
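As a concrete illustration of what the operation computes (a brute-force 2D sketch, not the CVMS algorithm itself): the Minkowski sum of two convex polygons is the convex hull of all pairwise vertex sums. CVMS avoids this brute force by identifying the contributing vertices per facet, but the naive version shows the definition.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in CCW order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    # Definition-level computation: hull of all pairwise vertex sums.
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tri = [(0, 0), (2, 0), (1, 1)]
hull = minkowski_sum(square, tri)
```

Summing a unit square with a triangle yields a hexagon; exact-arithmetic versions of exactly this kind of predicate-based construction are what CGAL provides to the paper's implementation.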

46 citations

Journal ArticleDOI
TL;DR: The method is based on a few geometric and topological predicates that allow it to handle all input/output cases considered degenerate in existing solutions, such as voids and non-manifold, disconnected, and unbounded meshes, and to deal robustly with special input configurations.
Abstract: Computing Boolean operations (Booleans) of 3D polyhedra/meshes is a basic and essential task in many domains, such as computational geometry, computer-aided design, and constructive solid geometry. Besides their utility and importance, Booleans are challenging to compute when dealing with meshes, because of topological changes, geometric degeneracies, etc. Most prior art techniques either suffer from robustness issues, deal with a restricted class of input/output meshes, or provide only approximate results. We overcome these limitations and present an exact and robust approach that performs on general meshes, which are required only to be closed and orientable. Our method is based on a few geometric and topological predicates that allow us to handle all input/output cases considered degenerate in existing solutions, such as voids and non-manifold, disconnected, and unbounded meshes, and to deal robustly with special input configurations. Our experimentation showed that our more general approach is also more robust and more efficient than Maya's implementation (×3), CGAL's robust Nef polyhedra (×5), and recent plane-based approaches. Finally, we also present a complete benchmark intended to validate Boolean algorithms under relevant and challenging scenarios, and we successfully ascertain both our algorithm and implementation with it.

34 citations

Journal ArticleDOI
TL;DR: It is demonstrated that a time-frequency (TF) image pattern recognition approach offers significant advantages over standard signal classification methods that use t- domain only or f-domain only features.
Abstract: This study demonstrates that a time-frequency (TF) image pattern recognition approach offers significant advantages over standard signal classification methods that use t-domain only or f-domain only features. Two approaches are considered and compared. The paper describes the significance of the standard TF approach for non-stationary signals; TF signal (TFS) features are defined by extending t-domain or f-domain features to a joint (t, f) domain, resulting in, e.g., TF flatness and TF flux. The performance of the extended TFS features is comparatively assessed using the Area Under the Curve (AUC) measure from Receiver Operating Characteristic (ROC) analysis. Experimental results confirm that the extended TFS features generally yield improved performance (up to 19%) when compared to the corresponding t-domain and f-domain features. The study also explores a second approach based on novel TF image (TFI) features that further improves TF-based classification of non-stationary signals. New TFI features are defined and extracted from the (t, f) domain; they include TF Hu invariant moments, TF Haralick features, and TF Local Binary Patterns (LBP). Using a state-of-the-art classifier, different metrics based on confusion matrix performance are compared for all extended TFS features and TFI features. Experimental results show the improved performance of TFI features over both TFS features and t-domain only or f-domain only features, for all TF representations and for all the considered performance metrics. The experiment is validated by comparing this new proposed methodology with a recent study utilizing the same large and complex data set of EEG signals and the same experimental setup. The resulting classification results confirm the superior performance of the proposed TFI features, with accuracy improvement up to 5.52%.
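The idea of extending a spectral feature to the joint (t, f) domain can be sketched on a precomputed magnitude spectrogram. The definitions below mirror the usual spectral flatness and spectral flux formulas applied over all TF bins; they are illustrative, not necessarily the paper's exact formulations.

```python
import math

def tf_flatness(S):
    """Spectral flatness extended to the joint (t, f) domain.

    S is a magnitude spectrogram given as a list of spectra (one list
    per time frame). Flatness = geometric mean / arithmetic mean over
    all TF bins; close to 1 for noise-like (flat) content.
    """
    vals = [v for frame in S for v in frame]
    n = len(vals)
    geo = math.exp(sum(math.log(v + 1e-12) for v in vals) / n)
    arith = sum(vals) / n
    return geo / arith

def tf_flux(S):
    """TF flux: summed squared change between consecutive spectra,
    measuring how fast the TF content evolves over time."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(S[t], S[t + 1]))
        for t in range(len(S) - 1)
    )

flat = [[1.0, 1.0], [1.0, 1.0]]      # perfectly flat spectrogram
peaky = [[1.0, 0.0], [0.0, 1.0]]     # concentrated, time-varying
```

For the flat spectrogram, flatness is ~1 and flux is 0; for the peaky one, flatness drops well below 1 and flux is positive, which is the discriminative behavior the TFS features exploit.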

29 citations

Journal ArticleDOI
TL;DR: Algorithms are proposed for safest routes and balanced routes in buildings where an extreme event with many epicenters is occurring; a balanced route trades off route length against hazard proximity.
Abstract: The extreme importance of emergency response in complex buildings during natural and human-induced disasters has been widely acknowledged. In particular, there is a need for efficient algorithms for finding safest evacuation routes that take into account the 3-D structure of buildings, their relevant semantics, and the nature and shape of hazards. In this article, we propose algorithms for safest routes and balanced routes in buildings where an extreme event with many epicenters is occurring. In a balanced route, a trade-off between route length and hazard proximity is made. The algorithms are based on a novel approach that integrates a multiattribute decision-making technique, Dijkstra's classical algorithm, and the introduced hazard proximity numbers, hazard propagation coefficient, and proximity index for a route.
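A minimal sketch of the balanced-route idea: run Dijkstra on a building graph whose edge cost blends geometric length with a hazard-proximity penalty. The graph format, the single blending parameter `alpha`, and the function name are illustrative assumptions, not the paper's multiattribute formulation.

```python
import heapq

def balanced_route(graph, src, dst, alpha=0.5):
    """Dijkstra with blended edge cost alpha*length + (1-alpha)*hazard.

    graph: {node: [(neighbor, length, hazard), ...]}
    alpha=1.0 yields the shortest route, alpha=0.0 the route that
    minimizes accumulated hazard proximity.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, length, hazard in graph.get(u, []):
            nd = d + alpha * length + (1 - alpha) * hazard
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the route by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]
```

On a toy graph where the short corridor passes near a hazard epicenter, `alpha=1.0` picks the short route, `alpha=0.0` the safe detour, and intermediate values realize the balanced trade-off.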

28 citations

Proceedings ArticleDOI
22 Apr 2013
TL;DR: This paper presents a fast point cloud simplification method that preserves sharp edge points; it is much faster than the most recently proposed simplification algorithm and still produces similar results.
Abstract: This paper presents a fast point cloud simplification method that preserves sharp edge points. The method combines clustering with a coarse-to-fine simplification approach. It first creates a coarse cloud using a clustering algorithm. Each point of the resulting coarse cloud is then assigned a weight that quantifies its importance and allows it to be classified as either a sharp point or a simple point. Finally, both kinds of points are used to refine the coarse cloud and thus create a new simplified cloud characterized by a high density of points in sharp regions and a low density in flat regions. Experiments show that our algorithm is much faster than the most recently proposed simplification algorithm [1] that preserves sharp edge points, while still producing similar results.
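The clustering (coarse) stage can be sketched with a simple voxel grid that keeps one centroid per occupied cell. This is only the first step; the importance weighting and the edge-preserving refinement are omitted, and the function name and cell-size parameter are illustrative assumptions.

```python
from collections import defaultdict

def voxel_simplify(points, cell=0.1):
    """Coarse point-cloud simplification by voxel clustering (sketch).

    Groups 3D points into axis-aligned cells of side `cell` and keeps
    the centroid of each cell. The paper's method would additionally
    weight points so that sharp-edge regions keep a higher density.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        cells[key].append((x, y, z))
    coarse = []
    for pts in cells.values():
        n = len(pts)
        # One representative per cell: the cluster centroid.
        coarse.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
    return coarse

cloud = [(0.01, 0.01, 0.01), (0.02, 0.0, 0.0), (0.95, 0.95, 0.95)]
coarse = voxel_simplify(cloud, cell=0.1)
```

Two nearby points collapse into one centroid while the distant point survives, which is exactly the density reduction the coarse stage provides before refinement.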

27 citations


Cited by
Journal ArticleDOI
TL;DR: This paper reviews the development and dissimilarities of GIS and BIM and the existing integration methods, investigates their potential in various applications, and shows that semantic web technologies provide a promising and generalized integration solution.
Abstract: The integration of Building Information Modeling (BIM) and Geographic Information System (GIS) has been identified as a promising but challenging topic for transforming information towards the generation of knowledge and intelligence. Integrating these two concepts and enabling technologies will have a significant impact on solving problems in the civil, building, and infrastructure sectors. However, since GIS and BIM were originally developed for different purposes, numerous challenges are encountered in the integration. To better understand these two different domains, this paper reviews the development and dissimilarities of GIS and BIM and the existing integration methods, and investigates their potential in various applications. This study shows that the integration methods were developed for various reasons and aim to solve different problems. The parameters influencing the choice can be summarized as the "EEEF" criteria: effectiveness, extensibility, effort, and flexibility. Compared with other methods, semantic web technologies provide a promising and generalized integration solution. However, the biggest challenges of this method are the large effort required at an early stage and the isolated development of ontologies within one particular domain. The isolation problem also applies to other methods. Therefore, openness is the key to the success of BIM and GIS integration.

247 citations

01 Jan 2014
TL;DR: This paper is an attempt to develop a general-purpose feature extraction scheme that can be utilized to extract features from different categories of EEG signals, and that achieves high accuracy in the classification of epileptic EEG signals.
Abstract: In this paper, an effective approach for feature extraction from raw electroencephalogram (EEG) signals by means of one-dimensional local binary patterns (1D-LBP) is presented. Given the importance of making the right decision, the proposed method was designed to extract better features from the EEG signals. The proposed method consists of two stages: feature extraction by 1D-LBP, and classification by classifier algorithms using the extracted features. In the classification stage, several machine learning methods were applied to uniform and non-uniform 1D-LBP features. The proposed method was also compared with other existing techniques in the literature to establish a benchmark on an epileptic data set. The implementation results showed that the proposed technique achieves high accuracy in the classification of epileptic EEG signals. The present paper is also an attempt to develop a general-purpose feature extraction scheme that can be utilized to extract features from different categories of EEG signals.
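A minimal sketch of 1D-LBP feature extraction, assuming the common formulation in which each sample is compared with its neighbors on both sides and the resulting binary codes are histogrammed. The window size and bit ordering here are illustrative choices, not necessarily the paper's.

```python
def lbp_1d(signal, radius=4):
    """1D local binary pattern features (sketch).

    For each sample, compare the `radius` neighbors on each side with
    the center: a neighbor >= center contributes a 1-bit. Each position
    yields a (2*radius)-bit code; the histogram of codes over the whole
    signal is the feature vector.
    """
    codes = []
    for i in range(radius, len(signal) - radius):
        center = signal[i]
        code = 0
        for j in range(-radius, radius + 1):
            if j == 0:
                continue  # skip the center sample itself
            code = (code << 1) | (1 if signal[i + j] >= center else 0)
        codes.append(code)
    hist = [0] * (1 << (2 * radius))
    for c in codes:
        hist[c] += 1
    return hist
```

The histogram length is 2^(2*radius) bins, and its entries sum to the number of valid center positions; such fixed-length vectors are what the classifiers in the paper consume.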

187 citations

Journal ArticleDOI
TL;DR: This report provides a systematic overview of directional field synthesis for graphics applications, the challenges it poses, and the methods developed in recent years to address these challenges.
Abstract: Direction fields and vector fields play an increasingly important role in computer graphics and geometry processing. The synthesis of directional fields on surfaces, or other spatial domains, is a fundamental step in numerous applications, such as mesh generation, deformation, texture mapping, and many more. The wide range of applications resulted in definitions for many types of directional fields: from vector and tensor fields, over line and cross fields, to frame and vector-set fields. Depending on the application at hand, researchers have used various notions of objectives and constraints to synthesize such fields. These notions are defined in terms of fairness, feature alignment, symmetry, or field topology, to mention just a few. To facilitate these objectives, various representations, discretizations, and optimization strategies have been developed. These choices come with varying strengths and weaknesses. This report provides a systematic overview of directional field synthesis for graphics applications, the challenges it poses, and the methods developed in recent years to address these challenges.

131 citations

Journal ArticleDOI
TL;DR: Solid modeling deals with the representation, design, visualization, and analysis of models of 3D parts as discussed by the authors, and the current trend follows two paths: capitalizing on the concepts of features, constraints, and model parameterization, which provide a more intuitive and suitable design vocabulary than the traditional edges, faces, or Boolean operations.
Abstract: Solid modeling deals with the representation, design, visualization, and analysis of models of 3D parts. While the embodiment of solid modeling technology in contemporary commercial CAD systems is finally beginning to fulfil the old promise of providing major improvements in the productivity of the manufacturing industry, solid modeling research remains in its infancy. Recent developments focus on advanced design paradigms, topological and geometric extensions of the domain, and the performance and reliability of the fundamental algorithms. The current trend follows two paths: capitalizing on the concepts of features, constraints, and model parameterization, which provide a more intuitive and suitable design vocabulary than the traditional edges, faces, or Boolean operations; and incorporating information about the tolerances, assembly relations, and mechanical properties of parts and assemblies, which provides a suitable product database for the development of analysis and planning applications. We selected the articles in the special issue carefully, choosing from among the papers presented at the 1993 ACM/IEEE Second Symposium on Solid Modeling and Applications.

130 citations

Book ChapterDOI
31 Dec 1950

126 citations