
Omron

Company · Kyoto, Japan
About: Omron is a company based in Kyoto, Japan. It is known for research contributions in the topics of Signal and Image processing. The organization has 5,963 authors who have published 6,612 publications receiving 75,382 citations. The organization is also known as: OMRON Corporation & Omuron Kabushiki-gaisha.


Papers
Proceedings Article
23 Jun 2013
TL;DR: This work considers both foreground and background cues, ranking the similarity of image elements to foreground or background queries via graph-based manifold ranking; the saliency of each element is defined by its relevance to the given seeds or queries.
Abstract: Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevances to the given seeds or queries. We represent the image as a close-loop graph with superpixels as nodes. These nodes are ranked based on their similarity to background and foreground queries using affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model, and we make this database publicly available with this paper for further studies in the saliency field.
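
The ranking step this abstract describes has a standard closed form. Below is a minimal sketch of graph-based manifold ranking over a superpixel graph, assuming a precomputed affinity matrix W and a binary query indicator y; the function name and the two-stage driver comments are illustrative, not the authors' released code.

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Graph-based manifold ranking: f* = (I - alpha*S)^{-1} y,
    where S = D^{-1/2} W D^{-1/2} is the symmetrically normalized
    affinity matrix and y indicates the query (seed) nodes."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

# Two-stage use, as the abstract describes:
# 1) rank with each image border as background queries and combine
#    the complemented scores into a coarse foreground map;
# 2) threshold that map, use the foreground as queries, and rank
#    again to obtain the final saliency value per superpixel.
```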

2,278 citations

Proceedings Article
20 May 2019
TL;DR: In this paper, a federated learning (FL) protocol for heterogeneous clients in a mobile edge computing (MEC) network is proposed. The authors identify the inefficiency that resource-limited clients introduce into the overall training process and propose a new protocol, FedCS, that actively selects clients under resource constraints.
Abstract: We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., requiring longer update time) or are under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly available large-scale image datasets to train deep neural networks in MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
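
As a rough illustration of the client selection idea, the sketch below greedily packs clients into a round under a deadline. The names, the sequential-upload assumption, and the cost accounting are simplifications for this sketch, not the published FedCS algorithm verbatim.

```python
from typing import Dict, List

def select_clients(update_time: Dict[str, float],
                   upload_time: Dict[str, float],
                   deadline: float) -> List[str]:
    """Greedily pack as many clients as possible into one FL round.
    Simplified time model: a client's local update runs in parallel
    with earlier uploads, and uploads happen one at a time."""
    selected: List[str] = []
    elapsed = 0.0
    remaining = set(update_time)
    while remaining:
        # Incremental cost of adding client k to the schedule:
        # wait for its update to finish (if still running), then upload.
        cost = lambda k: max(update_time[k] - elapsed, 0.0) + upload_time[k]
        best = min(remaining, key=cost)
        if elapsed + cost(best) > deadline:
            break
        elapsed += cost(best)
        selected.append(best)
        remaining.remove(best)
    return selected
```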

1,044 citations

Proceedings Article
01 Oct 2017
TL;DR: Amulet is presented, a generic aggregating multi-level convolutional feature framework for salient object detection that provides accurate salient object labeling and performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.
Abstract: Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches on nearly all compared evaluation metrics.
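
A minimal PyTorch sketch of the resolution-wise aggregation idea: resize reduced multi-level feature maps to a common resolution and learn to combine them into a saliency prediction. Channel sizes and module layout here are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionFusion(nn.Module):
    """Resize multi-level feature maps to one target resolution,
    reduce each to a common channel width, and learn a 1x1 conv
    that combines them into a single saliency map."""
    def __init__(self, in_channels, fused_channels=64):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, fused_channels, kernel_size=1) for c in in_channels])
        self.predict = nn.Conv2d(fused_channels * len(in_channels), 1,
                                 kernel_size=1)

    def forward(self, feats, target_size):
        # feats: list of (N, C_i, H_i, W_i) maps from different layers.
        resized = [F.interpolate(conv(f), size=target_size,
                                 mode='bilinear', align_corners=False)
                   for conv, f in zip(self.reduce, feats)]
        return torch.sigmoid(self.predict(torch.cat(resized, dim=1)))
```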

759 citations

Proceedings Article
01 Dec 2013
TL;DR: A visual saliency detection algorithm based on dense and sparse reconstruction errors is proposed, in which pixel-level saliency is refined by an object-biased Gaussian model and the two error-based measures are integrated via the Bayes formula.
Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.
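
A toy sketch of the two error terms the abstract contrasts, assuming `B` holds background-template features as rows and `x` is one region's feature vector; the propagation, multi-scale integration, object-biased refinement, and Bayesian fusion steps are omitted.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def dense_error(B, x, k=8):
    """Dense reconstruction error: reconstruct x from the top-k PCA
    basis of the background templates; large error suggests saliency."""
    mu = B.mean(axis=0)
    _, _, Vt = np.linalg.svd(B - mu, full_matrices=False)
    U = Vt[:k].T                       # d x k principal directions
    x_hat = mu + U @ (U.T @ (x - mu))
    return np.sum((x - x_hat) ** 2)

def sparse_error(B, x, n_nonzero=6):
    """Sparse reconstruction error: encode x with a few background
    templates via orthogonal matching pursuit."""
    D = B.T                            # dictionary: d x n_templates
    coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    return np.sum((x - D @ coef) ** 2)
```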

725 citations

Proceedings Article
07 Jun 2015
TL;DR: This method presents two interesting insights: first, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection; second, the complex relationship between different global saliency cues can be captured by deep networks and exploited in a principled manner rather than heuristically.
Abstract: This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map, together with global contrast and geometric information, is used to form global features describing a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited in a principled manner rather than heuristically. Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.
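
The final fusion step the abstract describes reduces to a weighted sum of candidate region masks. A minimal sketch, assuming precomputed candidate masks and DNN-G-style scores (all names illustrative):

```python
import numpy as np

def fuse_candidates(masks, scores, top_k=5):
    """Weighted sum of the top-k candidate region masks, weighted by
    the saliency scores a global network assigns them; normalized to
    [0, 1] to form the final saliency map."""
    order = np.argsort(scores)[::-1][:top_k]
    saliency = sum(scores[i] * masks[i].astype(float) for i in order)
    return saliency / (saliency.max() + 1e-12)
```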

690 citations


Authors


Name                    H-index   Papers   Citations
Shree K. Nayar          113       384      45,139
Junichiro Hayano        45        243      8,571
Ryo Yamamoto            37        238      7,281
Akira Nakajima          36        165      4,885
Kei Okada               36        459      6,719
Jorge Lobo              36        137      4,223
Ken-ichi Yamakoshi      34        205      3,804
Shihong Lao             34        115      5,751
Hiroshi Okabe           32        114      3,313
Hisamatsu Nakano        28        174      3,400
Satoshi Fujii           26        141      2,331
Teruyuki Hayashi        26        168      1,918
Yoshiaki Watanabe       26        248      2,215
Kimihiko Imamura        26        127      2,241
Shigeru Aoyama          24        134      1,930
Network Information
Related Institutions (5)

Hitachi: 101.4K papers, 1.4M citations (85% related)
Samsung: 163.6K papers, 2M citations (84% related)
Tokyo Institute of Technology: 101.6K papers, 2.3M citations (82% related)
University of Tsukuba: 79.4K papers, 1.9M citations (82% related)
Keio University: 71.3K papers, 1.5M citations (82% related)

Performance Metrics

No. of papers from the institution in previous years:

Year   Papers
2023   2
2022   2
2021   74
2020   198
2019   489
2018   423