Institution
Mitsubishi Electric
Company • Ratingen, Germany
About: Mitsubishi Electric is a company based in Ratingen, Germany. It is known for research contributions in the topics of Signal & Voltage. The organization has 23024 authors who have published 27591 publications receiving 255671 citations. The organization is also known as Mitsubishi Electric Corporation and Mitsubishi Denki K.K.
Topics: Signal, Voltage, Layer (electronics), Terminal (electronics), Electrode
Papers published on a yearly basis
Papers
19 Mar 1997
TL;DR: A system for simulating arthroscopic knee surgery, based on volumetric object models derived from 3D Magnetic Resonance Imaging, is presented. Feedback is provided to the user via real-time volume rendering and force feedback for haptic exploration.
Abstract: A system for simulating arthroscopic knee surgery that is based on volumetric object models derived from 3D Magnetic Resonance Imaging is presented. Feedback is provided to the user via real-time volume rendering and force feedback for haptic exploration. The system is the result of a unique collaboration between an industrial research laboratory, two major universities, and a leading research hospital. In this paper, components of the system are detailed and the current state of the integrated system is presented. Issues related to future research and plans for expanding the current system are discussed.
175 citations
TL;DR: In this article, a joint Connectionist Temporal Classification (CTC) and attention-based encoder-decoder network is proposed that learns to listen and write characters for end-to-end speech recognition.
Abstract: We present a state-of-the-art end-to-end Automatic Speech Recognition (ASR) model. We learn to listen and write characters with a joint Connectionist Temporal Classification (CTC) and attention-based encoder-decoder network. The encoder is a deep Convolutional Neural Network (CNN) based on the VGG network. The CTC network sits on top of the encoder and is jointly trained with the attention-based decoder. During the beam search process, we combine the CTC predictions, the attention-based decoder predictions and a separately trained LSTM language model. We achieve a 5-10\% error reduction compared to prior systems on spontaneous Japanese and Chinese speech, and our end-to-end model beats out traditional hybrid ASR systems.
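The beam-search combination described above can be sketched as a simple log-linear interpolation of the three model scores. The weights `lam` and `beta` below are assumed placeholder values, not the paper's tuned settings:

```python
import math

def combined_score(log_p_ctc, log_p_att, log_p_lm, lam=0.3, beta=0.5):
    """Interpolate CTC, attention-decoder, and LM log-probabilities for one
    beam-search hypothesis (lam and beta are illustrative weights)."""
    return lam * log_p_ctc + (1.0 - lam) * log_p_att + beta * log_p_lm

# Rank two hypothetical partial hypotheses by the combined score.
hyp_a = combined_score(math.log(0.20), math.log(0.30), math.log(0.10))
hyp_b = combined_score(math.log(0.05), math.log(0.40), math.log(0.20))
best = "A" if hyp_a > hyp_b else "B"
```

During decoding, each candidate extension of a beam hypothesis would be rescored with such a combined score before pruning.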
174 citations
17 May 1996
TL;DR: In this paper, the probability of the sequence of parts of speech being correct is used to correct improper use of troublesome words, especially identically sounding words that are spelled differently, in a grammar checking system in which a sentence is first tagged as to parts of speech.
Abstract: In a grammar checking system in which a sentence is first tagged as to parts of speech, the probability of the sequence of the parts of speech being correct is utilized to correct improper use of troublesome words, especially those identically sounding words which are spelled differently. The system corrects word usage based not on the probability of the entire sentence being correct but rather on the probability of the sequence of the parts of speech being correct. As part of the subject invention, the parts-of-speech sequence probability is utilized in part-of-speech sequence verification, underlying spelling recovery, auxiliary verb correction, determiner correction, and in a context-sensitive dictionary lookup.
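The core idea of scoring part-of-speech sequences rather than whole sentences can be illustrated with a toy bigram model. The tag names and probabilities below are invented for illustration and are not taken from the patent:

```python
# Pick between confusable, identically sounding words by scoring the
# part-of-speech sequence each choice induces.
POS = {"their": "POSS_PRON", "there": "ADV", "book": "NOUN", "is": "VERB"}

# Hypothetical POS-bigram probabilities P(tag_i | tag_{i-1}).
BIGRAM = {
    ("START", "POSS_PRON"): 0.10, ("START", "ADV"): 0.05,
    ("POSS_PRON", "NOUN"): 0.60, ("ADV", "NOUN"): 0.05,
    ("NOUN", "VERB"): 0.40,
}

def seq_prob(words):
    """Probability of the POS tag sequence under the toy bigram model."""
    prob, prev = 1.0, "START"
    for w in words:
        tag = POS[w]
        prob *= BIGRAM.get((prev, tag), 1e-6)
        prev = tag
    return prob

# Choose the homophone whose tag sequence is most probable.
candidates = [["their", "book", "is"], ["there", "book", "is"]]
best = max(candidates, key=seq_prob)
```

A real checker would use tag probabilities trained from a corpus; the point is only that the decision rests on the tag sequence, not on a full sentence language model.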
173 citations
06 Sep 2015
TL;DR: Several integration architectures are proposed and tested, including a pipeline architecture of LSTM-based SE and ASR with sequence training, an alternating estimation architecture, and a multi-task hybrid LSTM network architecture.
Abstract: Long Short-Term Memory (LSTM) recurrent neural networks have proven effective in modeling speech and have achieved outstanding performance in both speech enhancement (SE) and automatic speech recognition (ASR). To further improve the performance of noise-robust speech recognition, a combination of speech enhancement and recognition was shown to be promising in earlier work. This paper aims to explore options for consistent integration of SE and ASR using LSTM networks. Since SE and ASR have different objective criteria, it is not clear what kind of integration would finally lead to the best word error rate for noise-robust ASR tasks. In this work, several integration architectures are proposed and tested, including: (1) a pipeline architecture of LSTM-based SE and ASR with sequence training, (2) an alternating estimation architecture, and (3) a multi-task hybrid LSTM network architecture. The proposed models were evaluated on the 2nd CHiME speech separation and recognition challenge task, and show significant improvements relative to prior results.
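A multi-task architecture of the kind listed as option (3) is typically trained on a weighted sum of the two objectives. The interpolation weight `alpha` below is an assumed placeholder, not a value from the paper:

```python
def multitask_loss(loss_se, loss_asr, alpha=0.3):
    """Weighted combination of the enhancement (SE) and recognition (ASR)
    objectives for joint training; alpha is an illustrative weight."""
    return alpha * loss_se + (1.0 - alpha) * loss_asr

# Example: combine a per-batch SE loss of 0.8 and ASR loss of 2.0.
combined = multitask_loss(0.8, 2.0)
```

The shared LSTM layers then receive gradients from both criteria, which is what distinguishes this option from the pipeline and alternating-estimation architectures.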
173 citations
TL;DR: In this article, the authors describe the system outline and operating results of a new type of 20 MVA Static VAR Generator (SVG), which has been in operation in the electric power field since January 1980.
Abstract: This paper describes the system outline and the operating results of a new type of 20 MVA Static VAR Generator (SVG), which has been in operation in the electric power field since January 1980. This SVG consists of force-commutated voltage-source inverters and can be operated in both inductive and capacitive modes by simple control of the inverter output voltage. Special emphasis is placed on the system outline, the electrical design features, and the operating results, which coincide with the theoretical analysis.
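The inductive/capacitive mode switch by output-voltage control can be illustrated with the textbook relation for reactive power exchanged through a coupling reactance, Q = V_s (V_i - V_s) / X: raising the inverter voltage above the system voltage makes the SVG capacitive, lowering it makes it inductive. This is the generic relation, not the paper's exact design equations:

```python
def svg_reactive_power(v_inv, v_sys, x_coupling):
    """Reactive power exchanged by a voltage-source inverter with the grid
    through coupling reactance x_coupling (per-unit quantities, voltages
    assumed in phase). Positive = capacitive (VARs supplied to the grid)."""
    return v_sys * (v_inv - v_sys) / x_coupling

# Inverter voltage 5% above the system voltage -> capacitive operation.
mode = "capacitive" if svg_reactive_power(1.05, 1.0, 0.1) > 0 else "inductive"
```

Because only the magnitude of the inverter output voltage is adjusted, a single control variable moves the SVG smoothly between the two modes.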
172 citations
Authors
Showing all 23025 results
Name | H-index | Papers | Citations |
---|---|---|---|
Ron Kikinis | 126 | 684 | 63398 |
William T. Freeman | 113 | 432 | 69007 |
Takashi Saito | 112 | 1041 | 52937 |
Andreas F. Molisch | 96 | 777 | 47530 |
Markus Gross | 91 | 588 | 32881 |
Michael Wooldridge | 87 | 543 | 50675 |
Ramesh Raskar | 86 | 670 | 30675 |
Dan Roth | 85 | 523 | 28166 |
Joseph Katz | 81 | 691 | 27793 |
James S. Harris | 80 | 1152 | 28467 |
Michael Mitzenmacher | 79 | 422 | 36300 |
Hanspeter Pfister | 79 | 466 | 23935 |
Dustin Anderson | 78 | 607 | 28052 |
Takashi Hashimoto | 73 | 983 | 24644 |
Masaaki Tanaka | 71 | 860 | 22443 |