Yangyang Shi
Researcher at Facebook
Publications - 162
Citations - 1820
Yangyang Shi is an academic researcher at Facebook whose work spans computer science and medicine. He has an h-index of 14 and has co-authored 61 publications receiving 1,157 citations. His previous affiliations include Beijing University of Civil Engineering and Architecture and Microsoft.
Papers
Proceedings ArticleDOI
Spoken language understanding using long short-term memory neural networks
TL;DR: This paper investigates long short-term memory (LSTM) neural networks, which contain input, output, and forget gates and are more expressive than simple RNNs, for the word-labeling task, and proposes a regression model on top of the un-normalized LSTM scores to explicitly model output-label dependence.
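The gating mechanism the summary refers to can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all names, dimensions, and the random weights are illustrative, and the final layer simply emits per-word un-normalized label scores of the kind such a regression/label-dependence model would consume.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # Gates stacked in order: input (i), forget (f), candidate (g), output (o)
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:H])
    f = sigmoid(z[H:2 * H])
    g = np.tanh(z[2 * H:3 * H])
    o = sigmoid(z[3 * H:])
    c = f * c_prev + i * g   # forget part of the old memory, write new content
    h = o * np.tanh(c)       # output gate exposes a filtered view of the cell
    return h, c

def label_scores(words, W, U, b, W_out):
    # Run the LSTM left-to-right over word vectors and emit un-normalized
    # label scores per word (no softmax), as input to a label-dependence model.
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    scores = []
    for x in words:
        h, c = lstm_step(x, h, c, W, U, b)
        scores.append(W_out @ h)
    return np.stack(scores)

rng = np.random.default_rng(0)
D, H, L = 5, 8, 3                       # word-vector dim, hidden size, labels
W = 0.1 * rng.normal(size=(4 * H, D))
U = 0.1 * rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
W_out = 0.1 * rng.normal(size=(L, H))
sentence = rng.normal(size=(4, D))      # four words
S = label_scores(sentence, W, U, b, W_out)
print(S.shape)  # (4, 3): one score vector per word
```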
Proceedings ArticleDOI
Recurrent neural networks for language understanding.
TL;DR: This paper modifies the recurrent neural network architecture to perform language understanding, and advances the state of the art on the widely used ATIS dataset.
Journal ArticleDOI
Search for the chiral magnetic effect with isobar collisions at √sNN = 200 GeV by the STAR Collaboration at the BNL Relativistic Heavy Ion Collider
Mustafa Muzameal Suleman Abdallah, Bassam Aboona, Jaroslav Adam, Leszek Adamczyk, M. R. Adams, …, Yangyang Shi, … +377 more (STAR Collaboration)
TL;DR: In this paper, a blind analysis of a large data sample of approximately 3.8 billion isobar collisions of 96Ru+96Ru and 96Zr+96Zr at √sNN = 200 GeV was performed.
Proceedings ArticleDOI
Contextual spoken language understanding using recurrent neural networks
TL;DR: The proposed method obtains new state-of-the-art results on ATIS and improved performance over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.
Posted Content
Emformer: Efficient Memory Transformer Based Acoustic Model For Low Latency Streaming Speech Recognition
Yangyang Shi,Yongqiang Wang,Chunyang Wu,Ching-Feng Yeh,Julian Chan,Frank Zhang,Duc Le,Michael L. Seltzer +7 more
TL;DR: This paper proposes Emformer, an efficient memory transformer for low-latency streaming speech recognition, in which the long-range history context is distilled into an augmented memory bank to reduce the computational complexity of self-attention.
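The memory-bank idea behind that summary can be sketched in a few lines of NumPy. This is only an illustration of the general technique, not Emformer itself: each fixed-length segment attends to the current segment plus one summary vector per past segment (here a simple mean, a stand-in for the learned memory), so the attention context stays small instead of growing with the full history. All function names and sizes are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # Standard scaled dot-product attention.
    w = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return w @ v

def streaming_attention(frames, seg_len):
    # Process frames segment by segment; each segment attends only to itself
    # plus a bank of one summary vector per past segment, not all past frames.
    memory_bank, outputs = [], []
    for s in range(0, len(frames), seg_len):
        seg = frames[s:s + seg_len]
        if memory_bank:
            ctx = np.concatenate([np.stack(memory_bank), seg])
        else:
            ctx = seg
        outputs.append(attend(seg, ctx, ctx))
        memory_bank.append(seg.mean(axis=0))  # distil the segment into one slot
    return np.concatenate(outputs)

rng = np.random.default_rng(1)
frames = rng.normal(size=(12, 4))  # 12 acoustic frames, feature dim 4
out = streaming_attention(frames, seg_len=4)
print(out.shape)  # (12, 4)
```

With three segments of four frames, the last segment attends over 6 keys (2 memory slots + 4 frames) rather than all 12 frames, which is the complexity reduction the TL;DR describes.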