Author

Chyouhwa Chen

Other affiliations: Stony Brook University
Bio: Chyouhwa Chen is an academic researcher from National Taiwan University of Science and Technology. The author has contributed to research in topics: Multicast & Distance Vector Multicast Routing Protocol. The author has an h-index of 11 and has co-authored 34 publications receiving 407 citations. Previous affiliations of Chyouhwa Chen include Stony Brook University.

Papers
Proceedings ArticleDOI
10 Dec 2003
TL;DR: A fault-tolerant Web service called fault-tolerant SOAP (FT-SOAP) is proposed, through which Web services can be built with higher resilience to failure; it is based on previous experience with an object fault tolerant service (OFS) and OMG's fault-tolerant CORBA.
Abstract: Zwass (1996) identified middleware and message services as one of the five fundamental technologies used to realize electronic commerce (EC). The simple object access protocol (SOAP) is recognized as one of the more promising middleware technologies for EC applications among leading candidates such as CORBA. Many recent polls reveal, however, that security and reliability issues are major concerns that discourage people from engaging in EC transactions. We notice that the fault-tolerance issue is somewhat neglected in the current standard, i.e., SOAP 1.1. We therefore propose a fault-tolerant Web service called fault-tolerant SOAP, or FT-SOAP, through which Web services can be built with higher resilience to failure. FT-SOAP is based on our previous experience with an object fault tolerant service (OFS) [Liang, D. et al., (1999)] and OMG's fault-tolerant CORBA (FT-CORBA). There are many architectural differences between SOAP and CORBA. One of the major contributions of this work is to discuss the impact of these architectural differences on FT-SOAP's design. Our experience shows that Web services built on a SOAP framework enjoy higher flexibility than those built on CORBA. We also point out the limitations of the current feature set of SOAP 1.1. We believe our experience is valuable not only to the fault-tolerance community but to other communities as well, in particular those familiar with the CORBA platform.
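The abstract describes FT-SOAP's goal, replication-based resilience, without implementation detail. As a rough illustration of the underlying idea, the sketch below fails a SOAP call over to backup replicas when the primary is unreachable. It is a minimal Python sketch, not FT-SOAP itself; the endpoint list and the call_soap helper are hypothetical.

```python
# Minimal sketch of client-side failover across replicated service
# endpoints, in the spirit of FT-CORBA-style replication that FT-SOAP
# adapts to SOAP. ENDPOINTS and call_soap are hypothetical.
import urllib.request

ENDPOINTS = [  # primary first, then backups (invented addresses)
    "http://replica-a.example.org/service",
    "http://replica-b.example.org/service",
]

def call_soap(endpoint: str, envelope: bytes, timeout: float = 2.0) -> bytes:
    """POST a SOAP envelope to one endpoint; raises on transport failure."""
    req = urllib.request.Request(
        endpoint, data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

def invoke_with_failover(envelope: bytes) -> bytes:
    """Try each replica in order, failing over on transport errors."""
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            return call_soap(endpoint, envelope)
        except OSError as exc:   # connection refused, timeout, DNS failure
            last_error = exc     # remember the error, try the next replica
    raise RuntimeError("all replicas failed") from last_error
```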

86 citations

Journal ArticleDOI
TL;DR: The Stony Brook SYNCHEM system is a large knowledge-based domain-specific heuristic problem-solving program that is able to find valid synthesis routes for organic molecules of substantial interest and complexity without online guidance on the part of its user.
Abstract: The Stony Brook SYNCHEM system is a large knowledge-based domain-specific heuristic problem-solving program that is able to find valid synthesis routes for organic molecules of substantial interest and complexity without online guidance on the part of its user. In common with many such AI performance programs, SYNCHEM requires a substantial knowledge base to make it routinely useful, but as the designers of most of these programs have discovered, it is very difficult to engage domain experts in the long-term dedication and intensity of commitment necessary to create a production-quality knowledge base. ISOLDE and TRISTAN are machine learning programs that use large computer-readable databases of specific reaction instances as a source of training examples for algorithms designed to extract the underlying reaction schemata via inductive and deductive generalization. ISOLDE learns principally by inductive generalization, while TRISTAN makes use of a methodology that is primarily deductive, and which is usually described as explanation-based learning. Since the individual reaction entries in most computer-readable databases are often haphazardly sorted and classified, a taxonomy program called BRANGANE has been written to partition the input databases into coherent reaction classes using the methodology of conceptual clustering.
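The partitioning step BRANGANE performs, grouping a reaction database into coherent classes, can be illustrated with a toy leader-style clustering over reaction feature sets. This is a loose sketch of the general idea only, not BRANGANE's actual algorithm; the feature sets and threshold are invented.

```python
# Toy partitioning of reaction records into coherent classes by
# feature overlap (leader-style clustering), in the spirit of
# conceptual clustering; the real program is far more elaborate.

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def partition(reactions: list[set], threshold: float = 0.5) -> list[list[set]]:
    """Assign each reaction to the first class whose representative
    (its first member) is similar enough; otherwise start a new class."""
    classes: list[list[set]] = []
    for rxn in reactions:
        for cls in classes:
            if jaccard(rxn, cls[0]) >= threshold:
                cls.append(rxn)
                break
        else:
            classes.append([rxn])
    return classes

# Hypothetical reaction features (e.g. bond changes, functional groups):
rxns = [{"C=O", "NH2"}, {"C=O", "NH2", "ring"}, {"Cl", "OH"}]
print(partition(rxns))  # -> two classes under the 0.5 threshold
```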

77 citations

Journal ArticleDOI
TL;DR: This paper proposes addressing the virtual-server-based load balancing problem systematically using an optimization-based approach, derives an effective algorithm to rearrange loads among the peers, and systematically characterizes the effect of heterogeneity on load balancing algorithm performance.
Abstract: Application-layer peer-to-peer (P2P) networks are considered to be the most important development for next-generation Internet infrastructure. For these systems to be effective, load balancing among the peers is critical. Most structured P2P systems rely on ID-space partitioning schemes to solve the load imbalance problem and have been known to result in an imbalance factor of Θ(log N) in the zone sizes. This paper makes two contributions. First, we propose addressing the virtual-server-based load balancing problem systematically using an optimization-based approach and derive an effective algorithm to rearrange loads among the peers. We demonstrate the superior performance of our proposal in general and its advantages over previous strategies in particular. We also explore other important issues vital to performance in the virtual server framework, such as the effect of the number of directories employed in the system and the performance ramifications of user registration strategies. Second, and perhaps more significantly, we systematically characterize the effect of heterogeneity on load balancing algorithm performance and the conditions in which heterogeneity may be easy or hard to deal with, based on an extensive study of a wide spectrum of load and capacity scenarios.
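As a rough illustration of virtual-server rebalancing (not the paper's optimization-derived algorithm), the sketch below greedily moves virtual servers from the most-utilized peer to the least-utilized one. The capacities, loads, and stopping rule are assumptions made for the example.

```python
# Greedy rebalancing of virtual servers among heterogeneous peers.
# A simplified stand-in for the paper's optimization-based approach;
# the peer capacities and virtual-server loads are hypothetical.

def rebalance(peers: dict[str, float], vservers: dict[str, list[float]],
              max_moves: int = 100) -> dict[str, list[float]]:
    """peers: peer -> capacity; vservers: peer -> virtual-server loads.
    Repeatedly move one virtual server from the peer with the highest
    utilization (load/capacity) to the one with the lowest, stopping
    when no move reduces the worst-case utilization."""
    for _ in range(max_moves):
        util = {p: sum(vservers[p]) / peers[p] for p in peers}
        src = max(util, key=util.get)
        dst = min(util, key=util.get)
        if src == dst or not vservers[src]:
            break
        vs = max(vservers[src])                    # heaviest virtual server
        if (sum(vservers[dst]) + vs) / peers[dst] >= util[src]:
            break                                  # no improving move left
        vservers[src].remove(vs)
        vservers[dst].append(vs)
    return vservers

peers = {"A": 1.0, "B": 2.0, "C": 1.0}             # heterogeneous capacities
vs = {"A": [0.6, 0.5], "B": [0.3], "C": [0.2]}     # current placements
print(rebalance(peers, vs))
```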

42 citations

Journal ArticleDOI
TL;DR: Preliminary experiments indicate that OFS overhead is minimal and that client objects experience little response delay when a service object is under OFS surveillance.

32 citations

Journal ArticleDOI
TL;DR: A novel contrast enhancement method based on Gaussian mixture modeling of image histograms, which provides a sound theoretical underpinning of the partitioning process, and demonstrates the contrast enhancement advantage of the proposed method when compared to twelve state-of-the-art methods in the literature.
Abstract: The current major theme in contrast enhancement is to partition the input histogram into multiple sub-histograms before final equalization of each sub-histogram is performed. This paper presents a novel contrast enhancement method based on Gaussian mixture modeling of image histograms, which provides a sound theoretical underpinning of the partitioning process. Our method comprises five major steps. First, the number of Gaussian functions to be used in the model is determined using a cost function of input histogram partitioning. Then the parameters of a Gaussian mixture model are estimated to find the best fit to the input histogram under a threshold. A binary search strategy is then applied to find the intersection points between the Gaussian functions. The intersection points thus found are used to partition the input histogram into a new set of sub-histograms, on which the classical histogram equalization (HE) is performed. Finally, a brightness preservation operation is performed to adjust the histogram produced in the previous step into a final one. Based on three representative test images, the experimental results demonstrate the contrast enhancement advantage of the proposed method when compared to twelve state-of-the-art methods in the literature.
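The pipeline the abstract outlines (fit a Gaussian mixture to the histogram, split at component intersections, equalize each segment) can be sketched as follows. This is a simplified stand-in, assuming scikit-learn's GaussianMixture and approximating the intersection search by the points where the dominant component changes; the brightness-preservation step is omitted.

```python
# Sketch of GMM-based histogram partitioning followed by per-segment
# histogram equalization. Not the authors' exact algorithm: the
# component count is fixed and the final brightness step is skipped.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_equalize(gray: np.ndarray, n_components: int = 3) -> np.ndarray:
    """gray: uint8 image. Fit a GMM to pixel intensities, split the
    intensity range where the dominant component changes, then apply
    classical histogram equalization inside each segment."""
    gmm = GaussianMixture(n_components, random_state=0)
    gmm.fit(gray.reshape(-1, 1).astype(float))

    # Dominant component at each of the 256 intensity levels; the
    # change points approximate the Gaussian intersection points.
    levels = np.arange(256).reshape(-1, 1).astype(float)
    owner = gmm.predict(levels)
    cuts = [0] + [i for i in range(1, 256) if owner[i] != owner[i - 1]] + [256]

    out = gray.copy()
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        mask = (gray >= lo) & (gray < hi)
        if not mask.any():
            continue
        # Classical HE restricted to [lo, hi): map the segment's CDF
        # back onto the same intensity range.
        hist, _ = np.histogram(gray[mask], bins=hi - lo, range=(lo, hi))
        cdf = np.cumsum(hist) / hist.sum()
        out[mask] = (lo + cdf[gray[mask] - lo] * (hi - 1 - lo)).astype(np.uint8)
    return out
```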

27 citations


Cited by
Journal ArticleDOI
01 Mar 2018-Nature
TL;DR: This work combines Monte Carlo tree search with an expansion policy network that guides the search and a filter network that pre-selects the most promising retrosynthetic steps; the resulting system solves almost twice as many molecules, thirty times faster than the traditional computer-aided search method.
Abstract: To plan the syntheses of small organic molecules, chemists use retrosynthesis, a problem-solving technique in which target molecules are recursively transformed into increasingly simpler precursors. Computer-aided retrosynthesis would be a valuable tool but at present it is slow and provides results of unsatisfactory quality. Here we use Monte Carlo tree search and symbolic artificial intelligence (AI) to discover retrosynthetic routes. We combined Monte Carlo tree search with an expansion policy network that guides the search, and a filter network to pre-select the most promising retrosynthetic steps. These deep neural networks were trained on essentially all reactions ever published in organic chemistry. Our system solves for almost twice as many molecules, thirty times faster than the traditional computer-aided search method, which is based on extracted rules and hand-designed heuristics. In a double-blind AB test, chemists on average considered our computer-generated routes to be equivalent to reported literature routes.
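The core mechanism, a tree search whose exploration is biased by a learned policy prior, can be shown generically. The PUCT-style selection step below is a toy illustration of that idea, not the paper's system; molecule states and the trained expansion and filter networks are abstracted away.

```python
# Toy PUCT-style node selection as used in policy-guided Monte Carlo
# tree search; the chemistry-specific parts are abstracted away.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                      # P(a|s) from the policy network
    visits: int = 0
    value_sum: float = 0.0
    children: dict = field(default_factory=dict)

    @property
    def value(self) -> float:
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximizing Q + U, where U is the prior-weighted
    exploration bonus that lets the policy network guide the search."""
    total = sum(ch.visits for ch in node.children.values())
    def score(ch: Node) -> float:
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.value + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

# Hypothetical retrosynthetic moves with priors from a policy network:
root = Node(prior=1.0)
root.children = {"ring-opening": Node(0.7), "amide-cleavage": Node(0.3)}
action, child = select_child(root)
print(action)   # "ring-opening": highest prior wins before any visits
```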

1,146 citations

Journal ArticleDOI
TL;DR: This work explores the use of neural networks for predicting reaction types, using a new reaction fingerprinting method, and combines this predictor with SMARTS transformations to build a system which, given a set of reagents and reactants, predicts the likely products.
Abstract: Reaction prediction remains one of the major challenges for organic chemistry and is a prerequisite for efficient synthetic planning. It is desirable to develop algorithms that, like humans, “learn” from being exposed to examples of the application of the rules of organic chemistry. We explore the use of neural networks for predicting reaction types, using a new reaction fingerprinting method. We combine this predictor with SMARTS transformations to build a system which, given a set of reagents and reactants, predicts the likely products. We test this method on problems from a popular organic chemistry textbook.
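To make the SMARTS-transformation step concrete, here is a minimal RDKit sketch that applies a reaction template to reactants; the amide-formation template and molecules are illustrative inventions, not the paper's fingerprint model or template library.

```python
# Applying a SMARTS reaction template to reactants with RDKit, the
# kind of transformation a predicted reaction type would select.
# The simplified amide-formation template is an example only.
from rdkit import Chem
from rdkit.Chem import AllChem

# acid + amine -> amide (ignores the water by-product)
rxn = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OH].[N:3]>>[C:1](=[O:2])[N:3]")

acid = Chem.MolFromSmiles("CC(=O)O")    # acetic acid
amine = Chem.MolFromSmiles("NCC")       # ethylamine

for products in rxn.RunReactants((acid, amine)):
    print(Chem.MolToSmiles(products[0]))  # e.g. CCNC(C)=O
```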

331 citations

Journal ArticleDOI
TL;DR: In this paper, a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem, is presented.
Abstract: We describe a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis.
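The encoder-decoder architecture the abstract describes can be illustrated with a bare-bones recurrent sequence-to-sequence model in PyTorch. The vocabulary size, dimensions, and the GRU choice below are placeholders, not the published model's configuration; real inputs would be tokenized SMILES strings.

```python
# Bare-bones encoder-decoder over token sequences. Dimensions are
# placeholders; the published model is a larger recurrent network
# trained on 50,000 patent reactions.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab: int = 64, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # Encode the product tokens; the final hidden state seeds the
        # decoder, which predicts reactant tokens step by step.
        _, h = self.encoder(self.embed(src))
        dec, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec)          # logits over the token vocabulary

model = Seq2Seq()
src = torch.randint(0, 64, (2, 20))   # batch of 2 "product" sequences
tgt = torch.randint(0, 64, (2, 18))   # teacher-forced "reactant" tokens
logits = model(src, tgt)              # shape (2, 18, 64)
```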

328 citations

Journal ArticleDOI
TL;DR: Molecular similarity is demonstrated to be a surprisingly effective metric for proposing and ranking one-step retrosynthetic disconnections based on analogy to precedent reactions.
Abstract: We demonstrate molecular similarity to be a surprisingly effective metric for proposing and ranking one-step retrosynthetic disconnections based on analogy to precedent reactions. The developed approach mimics the retrosynthetic strategy defined implicitly by a corpus of known reactions without the need to encode any chemical knowledge. Using 40,000 reactions from the patent literature as a knowledge base, the recorded reactants are among the top 10 proposed precursors in 74.1% of 5,000 test reactions, providing strong quantitative support for our methodology. Extension of the one-step strategy to multistep pathway planning is demonstrated and discussed for two exemplary drug products.
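A minimal version of the similarity-ranking idea, scoring precedent reactions by fingerprint similarity between their products and the target molecule, looks like this with RDKit; the precedent pairs and SMILES are invented for illustration.

```python
# Ranking precedent reactions by Tanimoto similarity between the
# target molecule and each precedent's recorded product, the core of
# the analogy-based approach. The precedent corpus here is invented.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def fp(smiles: str):
    """Morgan (ECFP4-like) bit fingerprint for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

precedents = [            # (product SMILES, reactant SMILES) pairs
    ("CCNC(C)=O", "CC(=O)O.NCC"),
    ("c1ccccc1O", "c1ccccc1Br.[OH-]"),
]

target = "CCNC(=O)CC"     # molecule we want to disconnect
target_fp = fp(target)

ranked = sorted(
    precedents,
    key=lambda p: DataStructs.TanimotoSimilarity(target_fp, fp(p[0])),
    reverse=True)
print(ranked[0])          # most analogous precedent suggests precursors
```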

217 citations