
Showing papers by "Renmin University of China published in 2002"


Journal ArticleDOI
TL;DR: Introducing additional preference cones extends the previously developed inverse DEA model, allowing decision makers to incorporate their preferences or important policies over inputs/outputs into the production analysis and resource allocation process.

136 citations


Journal ArticleDOI
TL;DR: The primary objective of this study is to investigate the possibility of including more temporal and spatial information in short-term inflow forecasting, which is not easily attained in traditional time-series models or conceptual hydrological models.
Abstract: The primary objective of this study is to investigate the possibility of including more temporal and spatial information in short-term inflow forecasting, which is not easily attained in traditional time-series models or conceptual hydrological models. In order to achieve this objective, an artificial neural network (ANN) model for short-term inflow forecasting is developed and several issues associated with the use of an ANN model are examined. The formulated ANN model is used to forecast 1- to 7-h ahead inflows into a hydropower reservoir. The root-mean-squared error (RMSE), the Nash–Sutcliffe coefficient (NSC), the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) of the 1- to 7-h ahead forecasts, and the cross-correlation coefficient between the forecast and observed inflows are estimated. Model performance is analysed and some quantitative analysis is presented. The results obtained are satisfactory. Perceived strengths of the ANN model are its capability to represent complex and non-linear relationships and the ease with which additional information can be included in the model. Although the results obtained may not be universal, they are expected to reveal some possible problems in ANN models and provide helpful insights into the development and application of ANN models in the field of hydrology and water resources. Copyright © 2002 John Wiley & Sons, Ltd.
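The forecast-evaluation statistics named in the abstract are standard. As a rough illustrative sketch (not the paper's code), the RMSE and the Nash–Sutcliffe coefficient can be computed as:

```python
import math

def rmse(obs, sim):
    """Root-mean-squared error between observed and forecast inflows."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe coefficient: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot
```

In the study these statistics would be computed separately for each of the 1- to 7-h ahead forecast horizons.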

80 citations


Journal ArticleDOI
TL;DR: The authors explored a model of the relationships between negotiators' perceptions of the negotiation situation, their behavior, and negotiation outcomes, using data collected in Canada and China, and found that Canadian negotiators put more weight on their individual economic gains from negotiation.
Abstract: This study explores a model of the relationships between negotiators' perceptions of the negotiation situation, their behavior, and negotiation outcomes, using data collected in Canada and China. The results show that while Chinese negotiators are more concerned with maintaining good relations in the negotiation process, Canadian negotiators put more weight on their individual economic gains from negotiation. This result suggests a difference in a key work-related value: individualism/collectivism. Furthermore, Canadian negotiators' perceptions have less influence on their behavior than those of their Chinese counterparts. This could be explained by the fact that in a high-context culture like China, people's perceptions of the environment play an important role in how they behave.

36 citations


Journal ArticleDOI
TL;DR: The challenge of emissions reporting is examined, required as part of both China's pollution levy system and emerging system for "total emissions control," and practical steps toward exposure-based regulation of particulates are discussed.

33 citations


Journal ArticleDOI
TL;DR: The preliminary experiments indicate that the proposed semi-automatic approach to extracting data from HTML pages is not only easy to use but also able to produce a wrapper that extracts the required data from input pages with high accuracy.
Abstract: With the development of the Internet, the World Wide Web has become an invaluable information source for most organizations. However, most documents available from the Web are in HTML, which was originally designed for document formatting with little consideration of content. Effectively extracting data from such documents remains a non-trivial task. In this paper, we present a schema-guided approach to extracting data from HTML pages. Under the approach, the user defines a schema specifying what is to be extracted and provides sample mappings between the schema and the HTML page. The system induces the mapping rules and generates a wrapper that takes the HTML page as input and produces the required data as XML conforming to the user-defined schema. A prototype system implementing the approach has been developed. The preliminary experiments indicate that the proposed semi-automatic approach is not only easy to use but also able to produce a wrapper that extracts the required data from input pages with high accuracy.
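As a toy illustration of the wrapper idea (not the paper's actual system), a wrapper can be viewed as one induced extraction rule per schema field, applied to an HTML page to emit schema-shaped XML. Here the "induced rules" are simply regular expressions, and the field names are hypothetical:

```python
import re

def make_wrapper(field_patterns):
    """Build a wrapper from one extraction rule per schema field; each
    rule is a regex with a single capture group."""
    rules = {field: re.compile(pat) for field, pat in field_patterns.items()}

    def wrap(html):
        # Apply each rule to the page and wrap the captured values in
        # XML elements named after the schema fields.
        record = {f: r.search(html).group(1) for f, r in rules.items()}
        body = "".join(f"<{f}>{v}</{f}>" for f, v in record.items())
        return f"<record>{body}</record>"

    return wrap
```

For example, `make_wrapper({"title": r"<h1>(.*?)</h1>", "price": r"\$(\d+)"})` returns a function that turns a matching page into `<record><title>...</title><price>...</price></record>`; the real system induces far richer rules from user-supplied sample mappings.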

22 citations


Journal ArticleDOI
TL;DR: This paper presents a novel method that provides approximate answers to OLAP queries based on building a compressed data cubes by a clustering technique and using this compressed data cube to provide answers to queries directly, so it improves the performance of the queries.
Abstract: Approximate query processing has emerged as an approach to dealing with huge data volumes and complex queries in data warehouse environments. In this paper, we present a novel method that provides approximate answers to OLAP queries. Our method builds a compressed (approximate) data cube using a clustering technique and uses this compressed cube to answer queries directly, which improves query performance. We also provide algorithms for answering OLAP queries and confidence intervals for the query results. An extensive experimental study with the OLAP Council benchmark shows the effectiveness and scalability of our cluster-based approach compared to sampling.
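A minimal sketch of the compressed-cube idea, with all names hypothetical and simple equi-width bucketing standing in for the paper's clustering step: summarize a measure column as (centroid, count) pairs, then answer aggregate queries from the summaries alone, never touching the base data.

```python
def compress(values, k):
    """Summarize a measure column as (centroid, count) cluster summaries.
    Equi-width bucketing stands in for a real clustering algorithm."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # avoid zero width when all values equal
    buckets = {}
    for v in values:
        idx = min(int((v - lo) / width), k - 1)
        s, n = buckets.get(idx, (0.0, 0))
        buckets[idx] = (s + v, n + 1)
    return [(s / n, n) for s, n in buckets.values()]

def approx_sum(clusters, lo, hi):
    """Approximate SUM over the range [lo, hi], treating each centroid
    as a representative of all of its cluster's members."""
    return sum(c * n for c, n in clusters if lo <= c <= hi)
```

The paper additionally derives confidence intervals for such approximate answers; this sketch shows only the point estimate.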

21 citations


Journal ArticleDOI
TL;DR: This paper defines a set of ‘very worst preference orders’, which is independent of the selection of optimal solutions, and proves that compromise weights can be achieved within a finite number of adjustments of preference orders.
Abstract: In our early work, we described a minimax principle based procedure of preference adjustments with a finite number of steps to find compromise weights. The weights are obtained by solving a linear programming (LP) problem. The objective value of the LP is called a compromise index. When the index is non-negative, compromise weights are determined; otherwise we identify a set of ‘worst preference orders’ according to the optimal solution of the LP, for adjustment. However, this ‘worst preference order’ set depends on the selection of corresponding optimal solutions. This may have a negative impact on the decision-making procedure. This paper thoroughly discusses the problem of the existence of multiple optimal solutions. We define a set of ‘very worst preference order’, which is independent of the selection of optimal solutions. We prove that compromise weights can be achieved within a finite number of adjustments on preference orders. Numerical examples are given for illustration.

20 citations


Proceedings ArticleDOI
07 Aug 2002
TL;DR: The results show that, with the refining process, the SG-WRAP system can generate wrappers with very high accuracy, and the efficiency tests indicate that the wrapper generation process is fast even for large Web pages.
Abstract: Although wrapper generation work has been reported in the literature, there seem no standard ways to evaluate the performance of such systems. We conducted a series of experiments to evaluate the usability, correctness and efficiency of SG-WRAP. The usability tests selected a number of users to use the system. The results indicated that, with minimal introduction of the system, DTD definition and structure of HTML pages, even naive users could quickly generate wrappers without much difficulty. For correctness, we adapted the precision and recall metrics in information retrieval to data extraction. The results show that, with the refining process, the system can generate wrappers with very high accuracy. Finally, the efficiency tests indicated that the wrapper generation process is fast enough even with large size Web pages.
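The adaptation of IR metrics mentioned above is straightforward. A hedged sketch (not SG-WRAP's code), treating the extracted and expected data items as sets:

```python
def extraction_metrics(extracted, expected):
    """Precision/recall adapted from information retrieval:
    precision = correctly extracted items / all extracted items,
    recall    = correctly extracted items / all expected items."""
    extracted, expected = set(extracted), set(expected)
    correct = len(extracted & expected)
    precision = correct / len(extracted) if extracted else 0.0
    recall = correct / len(expected) if expected else 0.0
    return precision, recall
```

A wrapper that extracts {"a", "b", "c"} when {"b", "c", "d"} was expected scores precision 2/3 and recall 2/3.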

19 citations


Journal ArticleDOI
TL;DR: This work considers embedding a local search method into a global search method to obtain a locally optimal solution, which can provide a better bound and help trim more branches in a branch-and-bound algorithm for solving QPLCC.

19 citations


Journal ArticleDOI
TL;DR: Experimental results show that the TLRSP model provides an efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.
Abstract: In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides an efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.

12 citations


Journal ArticleDOI
TL;DR: In this article, the authors employ EGARCH models, dynamic causality tests, and VAR-based forecast error decompositions using daily data of a recent sample period that includes the Asian financial crisis of 1997 and up to April 20, 2001, and find that there is strong evidence of lagged returns and volatility spillovers from the NASDAQ market to the Asian second board markets when they exclude contemporaneous main board market returns.
Abstract: In Asia, NASDAQ's success has helped prompt Singapore (SESDAQ), Japan (JASDAQ), Taiwan (TAISDAQ) and South Korea (KOSDAQ) to set up or formalize their own second board markets in the 1980s and early 1990s. In 1999, Malaysia (MESDAQ) and Hong Kong (GEM) also set up their second board markets. Given the growing importance of these second board markets, we examine whether there is any evidence of spillovers from NASDAQ returns and volatilities to Asian second board market returns and volatilities and whether the cross-country spillovers are strong relative to domestic spillovers from the corresponding main board markets. For this purpose, we employ EGARCH models, dynamic causality tests, and VAR-based forecast error decompositions using daily data of a recent sample period that includes the Asian financial crisis of 1997 and up to April 20, 2001. We find that, first, there is strong evidence of lagged returns and volatility spillovers from the NASDAQ market to the Asian second board markets when we exclude contemporaneous main board market returns. Second, there is strong evidence of contemporaneous and lagged returns and volatility spillovers from the local main board markets to the corresponding second board markets. However, even in the presence of contemporaneous main board market returns, there remain substantial spillovers from the lagged NASDAQ returns and volatilities to Asian second board market returns and volatilities. These findings are not sensitive to whether we use U.S. dollar-based data or local currency-based data. Given the difference in the trading hours between the NASDAQ and Asian stock markets, we attempt to alleviate this concern by using some available intra-day return data and Canadian return data. The findings seem quite robust: There is substantial information spillover from the NASDAQ to Asian and Canadian second board markets. 
These findings indicate the existence of substantial cross-country industry effect (or meteor shower effect) as well as domestic market effect (or heat wave effect) and imply that both country diversification and industry diversification are important.

Journal ArticleDOI
TL;DR: A set of normative factors is proposed to guide the selection and application of various approaches for exposure assessment; the key criteria are compatibility with the specific hypothesis being tested and compatibility with the temporal and spatial scale of analysis.

Book ChapterDOI
12 Aug 2002
TL;DR: This paper proposes an algorithm to improve the effectiveness of k-NN by first selecting all relevant features and then assigning a weight to each one; the algorithm achieves the highest accuracy, or close to it, on all test datasets.
Abstract: The k-nearest neighbor (k-NN) classification is a simple and effective classification approach. However, it suffers from over-sensitivity to irrelevant and noisy features. In this paper, we propose an algorithm that improves the effectiveness of k-NN by combining feature selection and feature weighting: we first select all relevant features, and then assign a weight to each one. Experimental results show that our algorithm achieves the highest accuracy, or close to it, on all test datasets. It also achieves higher generalization accuracy than the well-known algorithms IB1-4 and C4.5.

01 Jan 2002
TL;DR: Wang et al. as discussed by the authors compare the efficient markets hypothesis and the fractal markets hypothesis as linear and nonlinear views of securities markets, and find that the daily and weekly index return distributions of the Shanghai and Shenzhen stock exchanges are not normal.
Abstract: The efficient markets hypothesis and the fractal markets hypothesis are, in essence, linear and nonlinear views of securities markets: the efficient markets hypothesis corresponds to the normal distribution, while the fractal markets hypothesis corresponds to a distribution with "fat tails". We find that the return distributions of the daily and weekly indices of the Shanghai and Shenzhen stock exchanges are not normal, while results for the monthly indices may differ depending on the period of the data. Comparing the return distributions of the weekly and daily indices of the two exchanges, we find fractal structure in the Chinese securities market.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the Moran arc, an arc containing a Moran set, is a Whitney critical set, and that the Sierpinski gasket and the Koch curve have Whitney critical subsets of full dimension.
Abstract: The problem concerns how large (e.g. in Hausdorff dimension) a Whitney critical set contained in a given fractal can be. To this end, we prove that the Moran arc, an arc containing a Moran set, is a Whitney critical set. The excellent open set condition is defined; when it holds, the associated self-similar set contains a Whitney critical subset of full dimension. As an application, the Sierpinski gasket and the Koch curve have Whitney critical subsets of full dimension. Finally, we provide a self-similar tree which never contains any Whitney critical set.

Journal ArticleDOI
TL;DR: In this article, an estimator of the survival function under the random censoring model is studied and a Bahadur-type representation of the estimator is obtained and asymptotic expression for its mean squared errors is given.
Abstract: We study an estimator of the survival function under the random censoring model. A Bahadur-type representation of the estimator is obtained and an asymptotic expression for its mean squared error is given, which leads to the consistency and asymptotic normality of the estimator. A data-driven local bandwidth selection rule for the estimator is proposed. It is worth noting that the estimator is consistent at left boundary points, which contrasts with the cases of density and hazard rate estimation. A Monte Carlo comparison of different estimators is made, and it appears that the proposed data-driven estimators have certain advantages over the common Kaplan-Meier estimator.
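For reference, the baseline Kaplan-Meier product-limit estimator against which the proposed estimator is compared can be sketched as follows (a minimal version, not the paper's smoothed estimator):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function.
    events[i] is 1 for an observed failure, 0 for a right-censored
    observation. Returns [(t, S(t))] at each observed failure time."""
    # At tied times, process failures before censorings (usual convention).
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    n_at_risk = len(data)
    survival, s = [], 1.0
    for t, e in data:
        if e == 1:
            s *= (n_at_risk - 1) / n_at_risk  # survive this failure time
            survival.append((t, s))
        n_at_risk -= 1  # censored subjects leave the risk set silently
    return survival
```

With no censoring this reduces to one minus the empirical distribution function; censored observations shrink the risk set without forcing a step down.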

Journal ArticleDOI
TL;DR: In this paper, an algebra approach for solving the linearly constrained continuous quasi-concave minimization problems is proposed, based on the fact that the optimal solutions can be achieved at an extreme point of the polyhedron.
Abstract: This paper proposes an algebra approach for solving the linearly constrained continuous quasi-concave minimization problems. The study involves a class of very generalized concave functions, continuous strictly quasi-concave functions. Based on the fact that the optimal solutions can be achieved at an extreme point of the polyhedron, we provide an algebra-based method for identifying the extreme points. The case on unbounded polyhedral constraints is also discussed and solved. Numerical examples are provided for illustration.

Journal Article
Qu Bo
TL;DR: The author designed an early-warning index system and discussed methods of early warning for land bubbles, which are tested against data from Japan.
Abstract: It is necessary and feasible to establish an early-warning system for land bubbles. Based on an analysis of the causes and characteristics of land bubbles, the author designed an early-warning index system and discussed methods of early warning for land bubbles, which are tested against data from Japan.

Journal ArticleDOI
TL;DR: In this paper, the comparison theorem for generalized backward stochastic differential equations is discussed, and some topics related to equations of this type are also investigated, such as the relation between generalized backward differential equations and generalized backward linear equations.
Abstract: The comparison theorem for generalized backward stochastic differential equations is discussed. Some topics related to equations of this type are also investigated.

Journal Article
TL;DR: Results of the analysis indicate that abnormal sex ratios of women's children ever born, and changes in the sex combination of children within families caused by sex preference, are additional causes of the abnormal sex ratio at birth in the population.
Abstract: The paper addresses some of the issues related to the change in the sex composition of children in families during the fertility transition and the main factors affecting this change. Results of the analysis indicate that abnormal sex ratios of women's children ever born, and changes in the sex combination of children within families caused by sex preference, are additional causes of the abnormal sex ratio at birth in the population.

Journal ArticleDOI
TL;DR: This paper presents a personalized information dissemination model based on How-Net, which uses a Concept Network-Views (CN-V) model to support information filtering, user’s interests modeling and information recommendation.
Abstract: The information dissemination model is becoming increasingly important in wide-area information systems. In this model, a user subscribes to an information dissemination service by submitting profiles that describe his interests. There have been several simple kinds of information dissemination services on the Internet, such as mailing lists, but they provide only a crude granularity of interest matching: a user whose information need does not exactly match certain lists will receive either too many irrelevant or too few relevant messages. This paper presents a personalized information dissemination model based on How-Net, which uses a Concept Network-Views (CN-V) model to support information filtering, user-interest modeling, and information recommendation. A Concept Network is constructed from the user's profiles and the content of documents; it describes concepts and their relations in the content and assigns different weights to these concepts. Usually the Concept Network is not well arranged and useful relations are hard to find in it, so several views are extracted from it to represent the important relations explicitly.

Journal Article
TL;DR: Strategy design focusing on process is emerging; it includes value chain design, business process design and operating process design.
Abstract: Strategy design focusing on process is emerging. It has three characteristics: ① it focuses on process; ② it is a continuing and dynamic process; ③ it requires the process to match the strategy. It includes value chain design, business process design and operating process design.

01 Jan 2002
TL;DR: This paper proves that the measure of information discrepancy is a distance function and shows that it is also an approximation of the χ2 function; these properties should stimulate further applications of the measure to information processing and system analysis.
Abstract: Based on a group of axioms, a measure of information discrepancy among multiple information sources was introduced in earlier work. It possesses some peculiar properties compared with other measures of information discrepancy, so it can be used in areas where the traditional measures are not valid or not efficient, for example in DNA sequence comparison, prediction of protein structure class, evidence analysis, and questionnaire analysis. In this paper, using optimization techniques, we prove that the measure is a distance function and show that it is also an approximation of the χ2 function. These two properties should stimulate further applications of the measure to information processing and system analysis.

Journal ArticleDOI
TL;DR: This paper proposes an algorithm to improve the effectiveness of k-NN by combining two approaches: first selecting all relevant features, and then assigning a weight to each relevant feature.
Abstract: The k-nearest neighbor (k-NN) classification is a simple and effective classification approach. However, it suffers from over-sensitivity to irrelevant and noisy features. There are two ways to relax such sensitivity: one is to assign each feature a weight, and the other is to select a subset of relevant features. Existing research has shown that both approaches can improve generalization accuracy, but it is impossible to predict which one is better for a specific dataset. In this paper, we propose an algorithm that improves the effectiveness of k-NN by combining these two approaches: we first select all relevant features, and then assign a weight to each of them. Experiments have been conducted on 14 datasets from the UCI Machine Learning Repository, and the results show that our algorithm achieves the highest accuracy, or close to it, on all test datasets, increasing generalization accuracy by 8.68% on average. It also achieves higher generalization accuracy than the well-known machine learning algorithms IB1-4 and C4.5.
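A hedged sketch of the two-stage idea described in the abstract: feature selection followed by feature weighting inside the k-NN distance. Here the selected features and their weights are supplied as inputs, whereas the paper learns them from the training data:

```python
from collections import Counter

def weighted_knn(train, labels, query, selected, weights, k=3):
    """Classify `query` by majority vote among its k nearest training
    points, where the distance uses only the selected features, each
    scaled by its weight (irrelevant features are simply excluded)."""
    def dist(x):
        return sum(weights[i] * (x[i] - query[i]) ** 2 for i in selected)

    ranked = sorted(range(len(train)), key=lambda j: dist(train[j]))
    votes = Counter(labels[j] for j in ranked[:k])
    return votes.most_common(1)[0][0]
```

Excluding a noisy feature from `selected` makes the classifier immune to arbitrary noise in that coordinate, which is exactly the over-sensitivity problem the paper targets.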

Journal Article
TL;DR: In this article, the authors consider not only the establishment and planning of internal goods-flow, but also the demands of social development and the characteristics of different firms.
Abstract: With the development of the modern goods-flow industry, goods-flow centers are becoming more and more important as the carriers of goods-flow activities. What we should attend to is not only the establishment and planning of internal goods-flow, but also the demands of social development and the characteristics of different firms.

Journal ArticleDOI
TL;DR: In this article, it is shown that for a separable simple C*-algebra A and each a (≠ 0) in A, there exists a separable, faithful and irreducible *-representation of A such that π(a) has a non-trivial invariant subspace in Hπ.
Abstract: Let A be a separable simple C*-algebra. For each a (≠ 0) in A, there exists a separable, faithful and irreducible *-representation (π, Hπ) of A such that π(a) has a non-trivial invariant subspace in Hπ.

Journal ArticleDOI
TL;DR: In this article, the notion of weak solution for stochastic differential equation with terminal conditions is introduced and the equivalence of existence of weak solutions for two-type equations is established.
Abstract: The notion of a weak solution for stochastic differential equations with terminal conditions is introduced. By the Girsanov transformation, the equivalence of the existence of weak solutions for two types of equations is established. Several sufficient conditions for the existence of weak solutions for stochastic differential equations with terminal conditions are obtained, and the solution existence condition for this type of equation is relaxed. Finally, an example is given to show that the result is an essential extension of the one under the Lipschitz condition on g with respect to (Y, Z).

Book ChapterDOI
06 May 2002
TL;DR: Two efficient constrained-cube construction algorithms, the NAIVE algorithm and the AGOA algorithm, are proposed, and experimental results indicate that this constraint-based exploratory mining method is efficient and scalable.
Abstract: Analysts often explore data cubes to identify anomalous regions that may represent problem areas or new opportunities. Discovery-driven exploration (proposed by S. Sarawagi et al [5]) automatically detects and marks the exceptions for the user and reduces the reliance on manual discovery. However, when the data is large, it is hard to materialize the whole cube due to the limitations of both space and time. So, exploratory mining on complete cube cells needs to construct the data cube dynamically. That will take a very long time. In this paper, we investigate optimization methods by pushing several constraints into the mining process. By enforcing several user-defined constraints, we first restrict the multidimensional space to a small constrained-cube and then mine exceptions on it. Two efficient constrained-cube construction algorithms, the NAIVE algorithm and the AGOA algorithm, were proposed. Experimental results indicate that this kind of constraint-based exploratory mining method is efficient and scalable.