# Fidaa Abed

Other affiliations: IT University, Information Technology University

Bio: Fidaa Abed is an academic researcher from the Max Planck Society. The author has contributed to research in topics such as probability distributions and logical matrices, has an h-index of 5, and has co-authored 13 publications receiving 129 citations. Previous affiliations of Fidaa Abed include IT University and Information Technology University.

##### Papers

TL;DR: Quality assessment using both quantitative evaluations and user studies suggests that the presented algorithm produces tone-mapped images that are visually pleasant and preserve details of the original image better than the existing methods.

Abstract: High-dynamic-range (HDR) images require tone mapping to be displayed properly on lower-dynamic-range devices. In this paper, a tone-mapping algorithm that uses a histogram of luminance to construct a lookup table (LUT) for tone mapping is presented. Characteristics of the human visual system (HVS) are used to give more importance to visually distinguishable intensities while constructing the histogram bins. The method begins with constructing a histogram of the luminance channel, using bins that are perceived to be uniformly spaced by the HVS. Next, a refinement step is used, which removes from the bins the pixels that are indistinguishable by the HVS. Finally, the available display levels are distributed among the bins in proportion to their pixel counts, thus giving due consideration to the visual contribution of each bin in the image. Quality assessment using both quantitative evaluations and user studies suggests that the presented algorithm produces tone-mapped images that are visually pleasant and preserve details of the original image better than the existing methods. Finally, implementation details of the algorithm on GPU for parallel processing are presented, which could achieve a significant gain in speed over a CPU-based implementation.
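
The histogram-then-LUT pipeline described above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: uniform log-luminance bins stand in for the HVS-derived bins, the refinement step is reduced to zeroing near-empty bins, and the bin count and threshold are arbitrary choices.

```python
import numpy as np

def tone_map(luminance, display_levels=256, min_bin_count=1):
    """Map HDR luminance to display levels via a histogram-derived LUT.

    Sketch: bins are uniform in log-luminance (a crude proxy for
    perceptual uniformity); display levels are distributed among bins
    in proportion to their pixel counts via the cumulative histogram.
    """
    log_lum = np.log10(np.maximum(luminance, 1e-6))
    n_bins = 64  # illustrative choice
    hist, edges = np.histogram(log_lum, bins=n_bins)
    hist = np.where(hist < min_bin_count, 0, hist)  # stand-in refinement step
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    # each bin receives a share of display levels proportional to its count
    lut = np.round(cdf * (display_levels - 1)).astype(int)
    bin_idx = np.clip(np.digitize(log_lum, edges[1:-1]), 0, n_bins - 1)
    return lut[bin_idx]
```

Because the LUT is built from the cumulative histogram, densely populated luminance ranges receive more output levels, which is the core idea of the paper's allocation step.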

73 citations

10 Sep 2012

TL;DR: A Ramsey-type graph theorem is proved and it is demonstrated that it is possible to break the $\Omega(\frac{\log m}{\log \log m})$ barrier on the price of anarchy by using known coordination mechanisms.

Abstract: We investigate coordination mechanisms that schedule n jobs on m unrelated machines. The objective is to minimize the latest completion time among all jobs, i.e., the makespan. It is known that if the mechanism is non-preemptive, the price of anarchy is Ω(log m). Both Azar, Jain, and Mirrokni (SODA 2008) and Caragiannis (SODA 2009) raised the question whether it is possible to design a coordination mechanism that has constant price of anarchy using preemption. We give a negative answer.
All deterministic coordination mechanisms, if they are symmetric and satisfy the property of independence of irrelevant alternatives, even with preemption, have price of anarchy $\Omega(\frac{\log m}{\log \log m})$. Moreover, all randomized coordination mechanisms, if they are symmetric and unbiased, even with preemption, similarly have price of anarchy $\Omega(\frac{\log m}{\log \log m})$.
Our lower bound complements the result of Caragiannis, whose bcoord mechanism guarantees $O(\frac{\log m}{\log \log m})$ price of anarchy. Our lower bound construction is surprisingly simple. En route we prove a Ramsey-type graph theorem, which can be of independent interest.
On the positive side, we observe that our lower bound construction critically uses the fact that the inefficiency of a job on a machine can be unbounded. If, on the other hand, the inefficiency is not unbounded, we demonstrate that it is possible to break the $\Omega(\frac{\log m}{\log \log m})$ barrier on the price of anarchy by using known coordination mechanisms.

29 citations

08 Sep 2014

TL;DR: A preemptive policy is designed that extends Smith-rule by adding extra delays on the jobs, accounting for the negative externality they impose on other players; it is established that this externality policy induces a potential game and that an ε-equilibrium can be found in polynomial time.

Abstract: We consider the unrelated machine scheduling game in which players control subsets of jobs. Each player’s objective is to minimize the weighted sum of completion times of her jobs, while the social cost is the sum of players’ costs. The goal is to design simple processing policies in the machines with small coordination ratio, i.e., the implied equilibria are within a small factor of the optimal schedule. We work with a weaker equilibrium concept that includes that of Nash. We first prove that if machines order jobs according to their processing-time-to-weight ratio, a.k.a. Smith-rule, then the coordination ratio is at most 4; moreover, this is best possible among nonpreemptive policies. Then we establish our main result. We design a preemptive policy, externality, that extends Smith-rule by adding extra delays on the jobs accounting for the negative externality they impose on other players. For this policy we prove that the coordination ratio is 1 + φ ≈ 2.618, and complement this result by proving that this ratio is best possible even if we allow for randomization or full information. Finally, we establish that this externality policy induces a potential game and that an ε-equilibrium can be found in polynomial time. An interesting consequence of our results is that ε-local optima of $R||\sum_j w_j C_j$ for the jump (a.k.a. move) neighborhood can be found in polynomial time and are within a factor of 2.618 of the optimal solution. The latter constitutes the first direct application of purely game-theoretic ideas to the analysis of a well-studied local search heuristic.
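
Smith-rule, which the externality policy builds on, is easy to state concretely: a machine processes its assigned jobs in nondecreasing order of processing-time-to-weight ratio, which minimizes the weighted sum of completion times on a single machine. A minimal single-machine sketch (the externality delays themselves are not reproduced here):

```python
def smith_rule_cost(jobs):
    """Weighted sum of completion times when one machine orders its jobs
    by Smith-rule, i.e. nondecreasing processing-time/weight ratio.

    jobs: list of (processing_time, weight) pairs, weights > 0.
    Illustrative sketch, not the paper's preemptive externality policy.
    """
    order = sorted(jobs, key=lambda pw: pw[0] / pw[1])
    t, cost = 0.0, 0.0
    for p, w in order:
        t += p          # completion time of this job
        cost += w * t   # its weighted contribution
    return cost
```

For example, with jobs (p=2, w=1) and (p=1, w=2), Smith-rule schedules the second job first and achieves weighted cost 5, versus 8 for the reverse order.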

29 citations

TL;DR: An efficient algorithm for enumerating all connected induced subgraphs of an undirected graph that integrates vertices’ attributes with subgraph enumeration and two pruning techniques that remove futile search nodes in the enumeration tree are proposed.

Abstract: Real biological and social data is increasingly being represented as graphs. Pattern-mining-based graph learning and analysis techniques report meaningful biological subnetworks that elucidate important interactions among entities. At the backbone of these algorithms is the enumeration of pattern space. We propose an efficient algorithm for enumerating all connected induced subgraphs of an undirected graph. Building on this enumeration approach, we propose an algorithm for mining all maximal cohesive subgraphs that integrates vertices’ attributes with subgraph enumeration. To efficiently mine all maximal cohesive subgraphs, we propose two pruning techniques that remove futile search nodes in the enumeration tree. Experiments on synthetic and real graphs show the effectiveness of the proposed algorithm and the pruning techniques. On enumerating all connected induced subgraphs, our algorithm is several times faster than existing approaches. On dense graphs, the proposed approach is at least an order of magnitude faster than the best existing algorithm. Experiments on protein-protein interaction network with cancer gene dysregulation profile show that the reported cohesive subnetworks are biologically interesting.
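
The enumeration backbone, listing every connected induced subgraph exactly once, can be sketched with a classic extension scheme (in the style of Wernicke's ESU algorithm); the paper's own algorithm and its cohesiveness-based pruning techniques are not reproduced here.

```python
def connected_induced_subgraphs(adj):
    """Enumerate all connected induced subgraphs of an undirected graph,
    each exactly once. adj: dict vertex -> set of neighbours, with
    comparable vertex labels (e.g. integers). Generic ESU-style sketch.
    """
    results = []

    def closed_nbhd(sub):
        out = set(sub)
        for u in sub:
            out |= adj[u]
        return out

    def extend(sub, ext, root):
        results.append(frozenset(sub))   # every visited set is connected
        ext = set(ext)
        while ext:
            w = ext.pop()
            blocked = closed_nbhd(sub)   # avoid re-reaching known vertices
            new_ext = ext | {u for u in adj[w] if u > root and u not in blocked}
            extend(sub | {w}, new_ext, root)

    # root each subgraph at its smallest vertex to avoid duplicates
    for v in sorted(adj):
        extend({v}, {u for u in adj[v] if u > v}, v)
    return results
```

On the path 0–1–2 this yields six subgraphs ({0}, {1}, {2}, {0,1}, {1,2}, {0,1,2}); the disconnected pair {0,2} is never produced.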

14 citations

01 Aug 2015

TL;DR: The main result for this problem is a quasi-PTAS, assuming the input data to be quasi-polynomially bounded integers; this factor matches the best known (quasi-polynomial time) result for (non-guillotine) two-dimensional knapsack.

Abstract: Imagine a wooden plate with a set of non-overlapping geometric objects painted on it. How many of them can a carpenter cut out using a panel saw making guillotine cuts, i.e., only moving forward through the material along a straight line until it is split into two pieces? Already fifteen years ago, Pach and Tardos investigated whether one can always cut out a constant fraction if all objects are axis-parallel rectangles. However, even for the case of axis-parallel squares this question is still open. In this paper, we answer the latter affirmatively. Our result is constructive and holds even in a more general setting where the squares have weights and the goal is to save as much weight as possible. We further show that answering the more general question for rectangles affirmatively, with only axis-parallel cuts, would yield a combinatorial O(1)-approximation algorithm for the Maximum Independent Set of Rectangles problem, and would thus solve a long-standing open problem. In practical applications, like the mentioned carpentry and many other settings, we can usually freely place the items that we want to cut out, which gives rise to the two-dimensional guillotine knapsack problem: Given a collection of axis-parallel rectangles without presumed coordinates, our goal is to place as many of them as possible in a square-shaped knapsack respecting the constraint that the placed objects can be separated by a sequence of guillotine cuts. Our main result for this problem is a quasi-PTAS, assuming the input data to be quasi-polynomially bounded integers. This factor matches the best known (quasi-polynomial time) result for (non-guillotine) two-dimensional knapsack.
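
The notion of guillotine separability can be made concrete on small instances by brute force: a cut is valid if a full vertical or horizontal line avoids every rectangle's interior, and the set is separable if some valid cut leaves two sides that are themselves separable. The sketch below is exponential-time and purely illustrative; the coordinates and the four-rectangle "pinwheel" example are assumptions for the demo, not taken from the paper.

```python
def guillotine_separable(rects):
    """True iff axis-parallel rectangles (x1, y1, x2, y2) can be fully
    separated by a sequence of guillotine cuts. Brute-force sketch:
    try every edge-aligned full cut that avoids all interiors and
    recurse on both sides."""
    if len(rects) <= 1:
        return True
    n = len(rects)
    for axis in (0, 1):  # 0: vertical cut at x = c, 1: horizontal at y = c
        coords = {r[axis] for r in rects} | {r[axis + 2] for r in rects}
        for c in coords:
            lo = [r for r in rects if r[axis + 2] <= c]  # entirely below/left
            hi = [r for r in rects if r[axis] >= c]      # entirely above/right
            # the cut is valid only if it slices through no rectangle
            if lo and hi and len(lo) + len(hi) == n:
                if guillotine_separable(lo) and guillotine_separable(hi):
                    return True
    return False
```

Two disjoint unit squares are trivially separable, while a pinwheel arrangement of four rectangles admits no full first cut and is therefore not guillotine-separable.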

12 citations

##### Cited by

TL;DR: This survey considers approximation and online algorithms for several classical generalizations of the bin packing problem, such as geometric bin packing, vector bin packing, and various other related problems.

202 citations

TL;DR: The quality of the local optima obtained by iterative improvement over the jump, swap, multi-exchange, and the newly defined push neighborhoods is analyzed and bounds on the number of local search steps required to find a local optimum are provided.

Abstract: This paper deals with the worst-case performance of local search algorithms for makespan minimization on parallel machines. We analyze the quality of the local optima obtained by iterative improvements over the jump, the swap, and the newly defined push neighborhood.

58 citations

TL;DR: An effective blind quality assessment approach for TM images is proposed through a comprehensive consideration of their characteristics; experiments show that the proposed approach is superior to the state-of-the-art no-reference IQA approaches.

Abstract: Nowadays, high-dynamic-range (HDR) imaging represents a prevailing trend and attracts much attention from both academia and industry. Since HDR images cannot be properly reproduced on mainstream low-dynamic-range (LDR) displays, various tone-mapping operators and postprocessing technologies have been designed to transform HDR images into LDR images for visualization on LDR displays. However, this inevitably induces artifacts and distortions due to dynamic range compression. Moreover, existing tone-mapped (TM) technologies cannot effectively handle all kinds of images with diverse contents and structures, leading to a very challenging and urgent image quality assessment (IQA) problem. To cope with this challenge, in this paper, an effective blind quality assessment approach for TM images is proposed through a comprehensive consideration of their characteristics. More specifically, to dig out sufficient information from TM images, multiple quality-sensitive features are captured to fully represent different attributes, including colorfulness, naturalness, and structure. The connection between the feature space and associated subjective ratings is established via a regression model. Extensive experiments on a recently released TM image database prove that the proposed approach is superior to the state-of-the-art no-reference IQA approaches.
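
To make the feature side of such a pipeline concrete, here is one well-known colorfulness measure (Hasler and Süsstrunk's metric) of the kind a blind TM-image assessor might feed into its regression model. This is an illustrative stand-in: the abstract does not specify which colorfulness feature the paper actually uses.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Susstrunk colorfulness metric for an RGB image.

    rgb: float array of shape (H, W, 3). Combines the spread and the
    magnitude of the opponent-colour components rg = R - G and
    yb = (R + G)/2 - B.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g
    yb = 0.5 * (r + g) - b
    std = np.hypot(rg.std(), yb.std())    # spread of opponent components
    mean = np.hypot(rg.mean(), yb.mean()) # offset from neutral
    return std + 0.3 * mean
```

A pure grey image scores 0; more saturated, varied images score higher, which is why such a scalar is a plausible input feature for a quality regressor.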

52 citations

New York University^{1}, University of Chile^{2}, Google^{3}, Massachusetts Institute of Technology^{4}

TL;DR: This work demonstrates local mechanisms that induce outcomes with social cost close to that of the socially optimal solution in the setting of a classic scheduling problem, and finds that mechanisms yielding Pareto-dominated outcomes may in fact enhance the overall performance of the system.

50 citations

TL;DR: In a system where noncooperative agents share a common resource, the ratio between the cost of the worst possible Nash equilibrium and the social optimum, termed the price of anarchy, is proposed as a measure of the effectiveness of the system.

Abstract: In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems.
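
The ratio the abstract proposes can be computed exhaustively on toy instances. The sketch below uses a makespan load-balancing game on identical machines, a deliberately simple stand-in for the paper's network model: each job picks a machine and pays that machine's load, and the price of anarchy is the worst Nash makespan divided by the optimal makespan.

```python
from itertools import product

def price_of_anarchy(sizes, m):
    """Brute-force price of anarchy for a toy load-balancing game on m
    identical machines. Each job chooses one machine and pays its
    machine's total load; social cost is the makespan. Exponential in
    the number of jobs, for illustration only."""
    def loads(profile):
        load = [0.0] * m
        for job, mach in enumerate(profile):
            load[mach] += sizes[job]
        return load

    def is_nash(profile):
        load = loads(profile)
        for job, mach in enumerate(profile):
            for alt in range(m):
                # a strict improvement by moving breaks the equilibrium
                if alt != mach and load[alt] + sizes[job] < load[mach]:
                    return False
        return True

    profiles = list(product(range(m), repeat=len(sizes)))
    opt = min(max(loads(p)) for p in profiles)
    worst_ne = max(max(loads(p)) for p in profiles if is_nash(p))
    return worst_ne / opt
```

With jobs of sizes 1, 1, 2, 2 on two machines, the optimum balances to makespan 3, but the split {2, 2} versus {1, 1} is a Nash equilibrium with makespan 4, giving a price of anarchy of 4/3.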

46 citations