Institution
Toyota
Company · Safenwil, Switzerland
About: Toyota is a company based in Safenwil, Switzerland. It is known for research contributions on the topics of internal combustion engines and exhaust gas. The organization has 40032 authors who have published 55003 publications receiving 735317 citations. The organization is also known as Toyota Motor Corporation and Toyota Jidosha KK.
Papers published on a yearly basis
Papers
29 Sep 2006
TL;DR: The Message Dispatcher is an interface between multiple safety applications and the lower-layer communication stack, and has become an underlying principle in the SAE safety message standardization process.
Abstract: This paper presents a method for efficient exchanges of Data Elements between vehicles running multiple safety applications. To date, significant efforts have been made in designing lower-layer communication protocols for VANET. Also, industry and government agencies have made progress in identifying and implementing certain vehicular safety applications. However, the specific environment of VANET-enabled safety applications lends itself to significant efficiencies in how information is coordinated within a vehicle and transmitted to neighboring vehicles. These efficiencies are instantiated in what we call the Message Dispatcher. The Message Dispatcher is an interface between multiple safety applications and the lower-layer communication stack. This Message Dispatcher concept was recently contributed to the Society of Automotive Engineers (SAE) and has become an underlying principle in their safety message standardization process. It has also been implemented in vehicle demonstrations at the Toyota Technical Center (TTC) in Ann Arbor, MI.
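The dispatching idea described above can be sketched as follows. This is a minimal illustration, not the SAE-standardized design: the class names, element names, and deduplication rule are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """A single named value that a safety application wants to broadcast."""
    name: str
    value: float

class MessageDispatcher:
    """Collects data elements from multiple safety applications and merges
    them into one outgoing message, so an element shared by several
    applications (e.g. vehicle speed) is transmitted once, not once per
    application."""
    def __init__(self):
        self._elements = {}  # name -> DataElement, deduplicated by name

    def register(self, element: DataElement):
        # A later registration of the same name overwrites the earlier one,
        # so each element appears at most once in the outgoing message.
        self._elements[element.name] = element

    def build_message(self) -> dict:
        # The merged payload handed to the lower-layer communication stack.
        return {e.name: e.value for e in self._elements.values()}

# Two applications both need vehicle speed; the dispatcher sends it once.
dispatcher = MessageDispatcher()
dispatcher.register(DataElement("speed_mps", 13.4))    # from application A
dispatcher.register(DataElement("brake_status", 1.0))  # from application A
dispatcher.register(DataElement("speed_mps", 13.4))    # from application B (duplicate)
dispatcher.register(DataElement("heading_deg", 92.0))  # from application B
print(dispatcher.build_message())
# → {'speed_mps': 13.4, 'brake_status': 1.0, 'heading_deg': 92.0}
```

The point of the sketch is the efficiency claim in the abstract: coordinating data elements inside the vehicle lets shared elements be transmitted to neighboring vehicles only once.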
97 citations
TL;DR: The reliability of joints for power semiconductor devices using a Bi-based high-temperature solder, prepared by mixing CuAlMn particles into molten Bi to overcome the brittleness of Bi, has been studied.
97 citations
11 Apr 2019
TL;DR: This article studies the implicit bias of gradient descent when optimizing loss functions with strictly monotone tails, such as the logistic loss, over separable datasets, and shows that gradient descent converges in the direction of the maximum-margin separator.
Abstract: We provide a detailed study on the implicit bias of gradient descent when optimizing loss functions with strictly monotone tails, such as the logistic loss, over separable datasets. We look at two basic questions: (a) what are the conditions on the tail of the loss function under which gradient descent converges in the direction of the $L_2$ maximum-margin separator? (b) how does the rate of margin convergence depend on the tail of the loss function and the choice of the step size? We show that for a large family of super-polynomial tailed losses, gradient descent iterates on linear networks of any depth converge in the direction of $L_2$ maximum-margin solution, while this does not hold for losses with heavier tails. Within this family, for simple linear models we show that the optimal rates with fixed step size is indeed obtained for the commonly used exponentially tailed losses such as logistic loss. However, with a fixed step size the optimal convergence rate is extremely slow as $1/\log(t)$, as also proved in Soudry et al (2018). For linear models with exponential loss, we further prove that the convergence rate could be improved to $\log (t) /\sqrt{t}$ by using aggressive step sizes that compensates for the rapidly vanishing gradients. Numerical results suggest this method might be useful for deep networks.
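The setting above can be reproduced in a few lines: plain gradient descent on the logistic loss over a small linearly separable dataset, with a fixed step size. The dataset and step size are invented for illustration; the code demonstrates only the basic phenomenon (the iterates separate the data and the weight direction stabilizes), not the paper's rate results.

```python
import numpy as np

# A small linearly separable 2-D dataset (labels in {-1, +1}).
X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def logistic_loss_grad(w):
    """Gradient of the mean logistic loss log(1 + exp(-y <w, x>))."""
    margins = y * (X @ w)
    # d/dm log(1 + e^{-m}) = -1 / (1 + e^{m})
    coeff = -y / (1.0 + np.exp(margins))
    return (coeff[:, None] * X).mean(axis=0)

w = np.zeros(2)
step_size = 1.0  # fixed step size, as in the paper's slow-rate regime
for _ in range(20000):
    w -= step_size * logistic_loss_grad(w)

# On separable data the norm of w diverges (slowly, ~log t), so the
# meaningful quantity is the direction w / ||w||.
direction = w / np.linalg.norm(w)
print("direction:", direction)
print("normalized margins:", y * (X @ direction))
```

Because the loss can always be driven lower by scaling up a separating `w`, the iterates never converge in norm; the paper's question is which direction the (diverging) iterates select, and at what rate.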
97 citations
TL;DR: In this paper, the authors derived an exact and simple formula of the oscillation probability P(νe→νμ) in constant matter by using a new method, and showed that the matter effects can be separated from the pure CP violation effects.
96 citations
19 Feb 2016
TL;DR: In this article, a deep convolutional neural network (DCNN) was proposed to determine a class of at least a portion of the image data based on the first likelihood score and the second likelihood score.
Abstract: By way of example, the technology disclosed by this document receives image data; extracts a depth image and a color image from the image data; creates a mask image by segmenting the depth image; determines a first likelihood score from the depth image and the mask image using a layered classifier; determines a second likelihood score from the color image and the mask image using a deep convolutional neural network; and determines a class of at least a portion of the image data based on the first likelihood score and the second likelihood score. Further, the technology can pre-filter the mask image using the layered classifier and then use the pre-filtered mask image and the color image to calculate a second likelihood score using the deep convolutional neural network to speed up processing.
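The two-score flow described above can be sketched as follows. The fusion rule (a weighted average), the class names, and the cutoff values are illustrative assumptions, not the disclosed method; only the overall shape (cheap depth-based pre-filter, then combine depth and color likelihoods) comes from the abstract.

```python
def fuse_scores(depth_score: float, color_score: float,
                depth_weight: float = 0.5, threshold: float = 0.5) -> str:
    """Combine the layered classifier's depth-based likelihood with the
    DCNN's color-based likelihood. The weighted average and the 0.5
    threshold are assumed for illustration."""
    combined = depth_weight * depth_score + (1.0 - depth_weight) * color_score
    return "object" if combined >= threshold else "background"

def classify_region(depth_score: float, run_dcnn) -> str:
    """Pre-filter with the cheap depth score: only invoke the expensive
    DCNN (run_dcnn) when the region survives the early reject. The 0.1
    cutoff is an assumed value."""
    if depth_score < 0.1:
        return "background"  # early reject: DCNN never runs
    return fuse_scores(depth_score, run_dcnn())

print(classify_region(0.05, lambda: 0.9))  # → background (DCNN skipped)
print(classify_region(0.80, lambda: 0.9))  # → object (0.5*0.8 + 0.5*0.9 = 0.85)
```

Passing the DCNN as a callable makes the speed-up mechanism explicit: regions rejected by the layered classifier never pay the cost of the deep network.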
96 citations
Authors
Showing all 40045 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Derek R. Lovley | 168 | 582 | 95315 |
| Edward H. Sargent | 140 | 844 | 80586 |
| Shanhui Fan | 139 | 1292 | 82487 |
| Susumu Kitagawa | 125 | 809 | 69594 |
| John B. Buse | 117 | 521 | 101807 |
| Meilin Liu | 117 | 827 | 52603 |
| Zhongfan Liu | 115 | 743 | 49364 |
| Wolfram Burgard | 111 | 728 | 64856 |
| Douglas R. MacFarlane | 110 | 864 | 54236 |
| John J. Leonard | 109 | 676 | 46651 |
| Ryoji Noyori | 105 | 627 | 47578 |
| Stephen J. Pearton | 104 | 1913 | 58669 |
| Lajos Hanzo | 101 | 2040 | 54380 |
| Masashi Kawasaki | 98 | 856 | 47863 |
| Andrzej Cichocki | 97 | 952 | 41471 |