Institution

Toyota

Company · Toyota City, Aichi, Japan

About: Toyota is a company headquartered in Toyota City, Aichi, Japan. It is known for research contributions in the topics: Internal combustion engine & Exhaust gas. The organization has 40,032 authors who have published 55,003 publications receiving 735,317 citations. The organization is also known as: Toyota Motor Corporation & Toyota Jidosha KK.


Papers
Proceedings ArticleDOI
29 Sep 2006
TL;DR: The Message Dispatcher is an interface between multiple safety applications and the lower-layer communication stack; the concept has become an underlying principle in the SAE safety message standardization process.
Abstract: This paper presents a method for efficient exchanges of Data Elements between vehicles running multiple safety applications. To date, significant efforts have been made in designing lower-layer communication protocols for VANET. Also, industry and government agencies have made progress in identifying and implementing certain vehicular safety applications. However, the specific environment of VANET-enabled safety applications lends itself to significant efficiencies in how information is coordinated within a vehicle and transmitted to neighboring vehicles. These efficiencies are instantiated in what we call the Message Dispatcher. The Message Dispatcher is an interface between multiple safety applications and the lower-layer communication stack. This Message Dispatcher concept was recently contributed to the Society of Automotive Engineers (SAE) and has become an underlying principle in their safety message standardization process. It has also been implemented in vehicle demonstrations at the Toyota Technical Center (TTC) in Ann Arbor, MI.

97 citations
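The Message Dispatcher described above coordinates the data elements of several in-vehicle safety applications and hands one consolidated message to the communication stack. The following is a minimal sketch of that idea, assuming a simple publish/assemble cycle; all class, method, and application names are illustrative, not the paper's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class MessageDispatcher:
    """Hypothetical sketch: merges data elements from multiple safety
    applications into a single outgoing message per broadcast cycle,
    instead of one message per application."""

    # Maps a data-element name (e.g. "speed") to its latest value.
    elements: dict = field(default_factory=dict)

    def publish(self, app_name: str, data_elements: dict) -> None:
        # Each application contributes its data elements; an element
        # needed by several apps (e.g. vehicle speed) is stored once.
        for name, value in data_elements.items():
            self.elements[name] = value

    def assemble_message(self) -> dict:
        # One consolidated message handed to the lower-layer stack.
        message = dict(self.elements)
        self.elements.clear()
        return message


dispatcher = MessageDispatcher()
dispatcher.publish("forward_collision_warning", {"speed": 13.4, "heading": 92.0})
dispatcher.publish("lane_change_assist", {"speed": 13.4, "yaw_rate": 0.02})
msg = dispatcher.assemble_message()
print(sorted(msg))  # the shared "speed" element appears once
```

The efficiency claimed in the abstract comes from exactly this deduplication: overlapping data elements cross the air interface once per cycle rather than once per application.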

Journal ArticleDOI
TL;DR: The reliability of joints for power semiconductor devices using a Bi-based high-temperature solder, prepared by mixing CuAlMn particles into molten Bi to overcome the brittleness of Bi, is studied.

97 citations

Proceedings Article
11 Apr 2019
TL;DR: This paper studies the implicit bias of gradient descent when optimizing loss functions with strictly monotone tails, such as the logistic loss, over separable datasets, and shows that gradient descent converges in the direction of the maximum-margin separator.
Abstract: We provide a detailed study on the implicit bias of gradient descent when optimizing loss functions with strictly monotone tails, such as the logistic loss, over separable datasets. We look at two basic questions: (a) what are the conditions on the tail of the loss function under which gradient descent converges in the direction of the $L_2$ maximum-margin separator? (b) how does the rate of margin convergence depend on the tail of the loss function and the choice of the step size? We show that for a large family of super-polynomial tailed losses, gradient descent iterates on linear networks of any depth converge in the direction of the $L_2$ maximum-margin solution, while this does not hold for losses with heavier tails. Within this family, for simple linear models we show that the optimal rate with a fixed step size is indeed obtained for the commonly used exponentially tailed losses such as the logistic loss. However, with a fixed step size the optimal convergence rate is extremely slow, $1/\log(t)$, as also proved in Soudry et al. (2018). For linear models with the exponential loss, we further prove that the convergence rate can be improved to $\log(t)/\sqrt{t}$ by using aggressive step sizes that compensate for the rapidly vanishing gradients. Numerical results suggest this method might be useful for deep networks.

97 citations
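The phenomenon the abstract describes can be reproduced in a few lines. Below is a minimal illustration (not the paper's experiments): gradient descent on the exponential loss over a small linearly separable 2-D dataset. The iterate norm grows without bound, but the direction w / ||w|| approaches the L2 maximum-margin separator, which for this symmetric dataset is (1, 1)/sqrt(2); the dataset and step size are arbitrary choices for the demo.

```python
import numpy as np

# Separable toy data, symmetric about the line x1 = x2, so the
# L2 max-margin direction is (1, 1) / sqrt(2) by symmetry.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
step = 0.1
for t in range(20000):
    margins = y * (X @ w)
    # Gradient of sum_i exp(-y_i <w, x_i>) with respect to w.
    grad = -(y[:, None] * X * np.exp(-margins)[:, None]).sum(axis=0)
    w -= step * grad

direction = w / np.linalg.norm(w)
print(direction)  # close to [0.7071, 0.7071]
```

As the abstract notes, this directional convergence is extremely slow with a fixed step size (the margin grows only like log t), which is what motivates the paper's aggressive step-size schedule.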

Journal ArticleDOI
TL;DR: In this paper, the authors derived an exact and simple formula of the oscillation probability P(νe→νμ) in constant matter by using a new method, and showed that the matter effects can be separated from the pure CP violation effects.

96 citations

Patent
19 Feb 2016
TL;DR: In this article, a deep convolutional neural network (DCNN) was proposed to determine a class of at least a portion of the image data based on the first likelihood score and the second likelihood score.
Abstract: By way of example, the technology disclosed by this document receives image data; extracts a depth image and a color image from the image data; creates a mask image by segmenting the depth image; determines a first likelihood score from the depth image and the mask image using a layered classifier; determines a second likelihood score from the color image and the mask image using a deep convolutional neural network; and determines a class of at least a portion of the image data based on the first likelihood score and the second likelihood score. Further, the technology can pre-filter the mask image using the layered classifier and then use the pre-filtered mask image and the color image to calculate a second likelihood score using the deep convolutional neural network to speed up processing.

96 citations
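The patent abstract above combines two likelihood scores, one from a fast layered classifier on the depth/mask images and one from a DCNN on the color/mask images, into a final class decision. The sketch below shows only that fusion step; the weighted-average rule, the threshold, and the class names are illustrative assumptions, not the patent's actual method.

```python
def classify(depth_score: float, dcnn_score: float,
             weight: float = 0.4, threshold: float = 0.5):
    """Hypothetical fusion of the two likelihoods described in the
    patent. The patent leaves the combination rule open, so a convex
    blend of the two scores is assumed here."""
    combined = weight * depth_score + (1 - weight) * dcnn_score
    label = "pedestrian" if combined >= threshold else "background"
    return label, combined


# The depth-based layered classifier can also act as a cheap pre-filter:
# only regions it does not reject are passed on to the slower DCNN.
label, score = classify(depth_score=0.8, dcnn_score=0.9)
print(label, round(score, 2))  # → pedestrian 0.86
```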


Authors

Showing all 40045 results

Name | H-index | Papers | Citations
Derek R. Lovley | 168 | 582 | 95,315
Edward H. Sargent | 140 | 844 | 80,586
Shanhui Fan | 139 | 1,292 | 82,487
Susumu Kitagawa | 125 | 809 | 69,594
John B. Buse | 117 | 521 | 101,807
Meilin Liu | 117 | 827 | 52,603
Zhongfan Liu | 115 | 743 | 49,364
Wolfram Burgard | 111 | 728 | 64,856
Douglas R. MacFarlane | 110 | 864 | 54,236
John J. Leonard | 109 | 676 | 46,651
Ryoji Noyori | 105 | 627 | 47,578
Stephen J. Pearton | 104 | 1,913 | 58,669
Lajos Hanzo | 101 | 2,040 | 54,380
Masashi Kawasaki | 98 | 856 | 47,863
Andrzej Cichocki | 97 | 952 | 41,471
Network Information
Related Institutions (5)
Tokyo Institute of Technology
101.6K papers, 2.3M citations

89% related

Eindhoven University of Technology
52.9K papers, 1.5M citations

87% related

Osaka University
185.6K papers, 5.1M citations

86% related

KAIST
77.6K papers, 1.8M citations

86% related

Performance Metrics

No. of papers from the Institution in previous years:

Year | Papers
2023 | 1
2022 | 32
2021 | 942
2020 | 1,846
2019 | 2,981
2018 | 2,541