
Showing papers by "Renmin University of China" published in 2004


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper investigated the extent and determinants of voluntary Internet-based corporate disclosures (ICD) by listed Chinese companies and found that the extent of voluntary ICD is positively and significantly related to firm size, and that the presentation format is associated with the employment of a Big 5 international audit firm and whether the firm is in the information technology industry.

592 citations


Journal ArticleDOI
TL;DR: This research identifies necessary and sufficient conditions for evidence of congestion and of constant, increasing, and decreasing returns to scale, and shows the exclusive relationships between the concepts of congestion and returns to scale.

107 citations


Journal ArticleDOI
01 Jun 2004-Abacus
TL;DR: The authors investigated the role of political influence, as well as accounting tradition and the equity market, in China's recent changes in accounting regulation, and found that the Chinese government, in part self-motivated and under external pressure, has been active in developing accounting standards in harmony with international accounting standards.
Abstract: This article investigates the role of political influence, as well as accounting tradition and the equity market, in China's recent changes in accounting regulation. We find that the Chinese government, in part self-motivated and in part under external pressure, has been active in developing accounting standards in harmony with international accounting standards. However, it has retained a uniform accounting system in the Enterprise Accounting System issued in 2000 to accommodate the special circumstances of a transforming government, strong state-ownership, a weak accounting profession, a weak and imperfect equity market, and the inertial effect of accounting tradition and cultural factors. This article also contributes to existing models of accounting system classification by illustrating the need for considering political influence as a factor that affects the rate of transition towards full implementation of international accounting standards.

105 citations


Journal ArticleDOI
TL;DR: This article examined the informational role of the interaction between past returns and past trading volume in the prediction of cross-sectional returns over intermediate horizons in China's stock market, and found that low-volume stocks outperform high-volume stocks, volume discounts are more pronounced for past winners than for past losers, low-volume stocks experience return continuations, and high-volume winners exhibit return reversals.
Abstract: We examine the informational role of the interaction between past returns and past trading volume in the prediction of cross-sectional returns over intermediate horizons in China's stock market. Our results show that low-volume stocks outperform high-volume stocks, volume discounts are more pronounced for past winners than for past losers, low-volume stocks experience return continuations, and high-volume winners exhibit return reversals. Our results are robust to risk adjustments relative to Fama and French's three-factor model, and to stock exchange as well as large-stock sub-samples. Our findings are not entirely consistent with the literature; the differences likely result from the market's characteristics, in particular the short-sales prohibition and the dominance of individual investors.

89 citations


Journal ArticleDOI
TL;DR: This article used the standard contrarian portfolio approach to examine short-horizon return predictability in 24 US futures markets and found strong evidence of weekly return reversals, similar to the findings from equity market studies.
Abstract: We use the standard contrarian portfolio approach to examine short-horizon return predictability in 24 US futures markets. We find strong evidence of weekly return reversals, similar to the findings from equity market studies. When past returns are interacted with lagged changes in trading activity (volume and/or open interest), we find that the profits to contrarian portfolio strategies are, on average, positively associated with lagged changes in trading volume, but negatively related to lagged changes in open interest. We also show that futures return predictability is more pronounced when past returns are interacted with lagged changes in both volume and open interest. Our results suggest that futures market overreaction exists, and that both past prices and trading activity contain useful information about future market movements. These findings have implications for futures market efficiency and are useful for futures market participants, particularly commodity pool operators.
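To make the portfolio construction concrete, the following is a minimal sketch of a Lehmann-style contrarian weighting scheme on hypothetical data: each contract is weighted by the negative deviation of its past return from the cross-sectional mean, giving a zero-cost portfolio that shorts past winners and buys past losers. The data and parameters are illustrative assumptions, not the paper's sample or exact methodology.

```python
import numpy as np

def contrarian_weights(past_returns):
    """Lehmann-style contrarian weights: short past winners, long past losers.

    Weights are proportional to the negative deviation of each contract's past
    return from the cross-sectional mean, so they sum to zero (a zero-cost
    portfolio)."""
    deviations = past_returns - past_returns.mean()
    return -deviations / len(past_returns)

# Hypothetical example: weekly returns of 5 futures contracts.
rng = np.random.default_rng(0)
past_week = rng.normal(0.0, 0.02, size=5)    # formation-week returns
next_week = rng.normal(0.0, 0.02, size=5)    # holding-week returns

w = contrarian_weights(past_week)
profit = float(w @ next_week)                # zero-cost portfolio payoff
print("weights:", np.round(w, 4), "profit:", round(profit, 5))
```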

76 citations


Journal ArticleDOI
TL;DR: It is shown that the broadening of the impurity level leads to an additional and important contribution to the Fano resonance around the Fermi surface, especially in the mixed valence regime.
Abstract: We present a general theory for the Fano resonance in Anderson impurity systems. It is shown that the broadening of the impurity level leads to an additional and important contribution to the Fano resonance around the Fermi surface, especially in the mixed valence regime. This contribution results from the interference between the Kondo resonance and the broadened impurity level. Being applied to the scanning tunneling microscopic experiments, we find that our theory gives a consistent and quantitative account for the Fano resonance line shapes for both Co and Ti impurities on Au or Ag surfaces. The Ti systems are found to be in the mixed valence regime.

59 citations


Book ChapterDOI
25 Aug 2004
TL;DR: In this paper, a new approach is proposed for combining multiple KNN classifiers based on different distance functions, applying several distance measures to improve the performance of the k-nearest neighbor classifier.
Abstract: The k-nearest neighbor (KNN) classification is a simple and effective classification approach. However, improving the performance of the classifier remains attractive. Combining multiple classifiers is an effective technique for improving accuracy. There are many general combining algorithms, such as Bagging, Boosting, or Error-Correcting Output Coding, that significantly improve classifiers such as decision trees, rule learners, or neural networks. Unfortunately, these combining methods do not improve nearest neighbor classifiers. In this paper we present a new approach that combines multiple KNN classifiers based on different distance functions, applying several distance measures to improve the performance of the k-nearest neighbor classifier. The proposed algorithm seeks to increase generalization accuracy when compared to the basic k-nearest neighbor algorithm. Experiments have been conducted on benchmark datasets from the UCI Machine Learning Repository. The results show that the proposed algorithm improves the performance of k-nearest neighbor classification.
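As a rough illustration of the idea, here is a minimal sketch (assuming a simple majority-vote combiner, which the abstract does not specify) that trains one scikit-learn KNN classifier per distance function and combines their predictions:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One KNN classifier per distance function; a simple majority vote combines them.
metrics = ["euclidean", "manhattan", "chebyshev"]
classifiers = [KNeighborsClassifier(n_neighbors=5, metric=m).fit(X_tr, y_tr)
               for m in metrics]

predictions = np.array([clf.predict(X_te) for clf in classifiers])  # shape (3, n)
# Majority vote across the three distance-based classifiers.
combined = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, predictions)

for m, clf in zip(metrics, classifiers):
    print(f"{m:10s} accuracy: {clf.score(X_te, y_te):.3f}")
print(f"combined   accuracy: {(combined == y_te).mean():.3f}")
```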

36 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper examined the relation between extreme trading volumes and expected returns for individual stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange over the July 1994-December 2000 interval.
Abstract: We examine the relation between extreme trading volumes and expected returns for individual stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange over the July 1994–December 2000 interval. Contrasted with the evidence obtained from the US data [J. Finance 56 (2001) 877], our results show that stocks experiencing extremely high (low) volumes are associated with low (high) subsequent returns. Moreover, this extreme volume–return relation significantly co-varies with security characteristics like past stock performance, firm size, and book-to-market values. In particular, stocks with extreme volumes are related to poorer performance if they are past winners, large firms, and glamour stocks than if they are past losers, small firms, and value stocks, respectively. These results are robust to both daily and weekly samples as well as stock exchange sub-samples. Although the liquidity premium hypothesis of Amihud and Mendelson [J. Financ. Econ. 17 (1986) 223] provides a partial explanation for the extreme volume–return relation, our results fit better the behavioral hypothesis of Baker and Stein [J. Financ. Mark. 7 (2004) 271].

27 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a case study showing that ISO 15489 can also be used for measuring records management performance, which can contribute to the identification of gaps between best practice, captured in the standard, and what is happening in reality in relation to records management policies, programmes, procedures and processes.
Abstract: ISO 15489 has provided best practice guidelines for records management which have implications worldwide. This paper, based on the results of a research project, provides a case study which shows that ISO 15489 can also be used for measuring records management performance. It can contribute to the identification of gaps between best practice, captured in the standard, and what is happening in reality in relation to records management policies, programmes, procedures and processes. It can then provide directions for further effective improvement. The author introduces records management in China and its features to provide the context and then measures records management in China against the items of ISO 15489. Based on the weaknesses found, the author gives suggestions for the improvement of records management in China.

23 citations


Proceedings ArticleDOI
26 Aug 2004
TL;DR: An ontology learning approach is proposed, which uses WordNet lexicon resources to build a standard OWL ontology model; the approach enables the automation of ontology building and can be very useful in ontology-based applications.
Abstract: Ontology-based approaches have been popularized by current Semantic Web research. However, ontology building by hand has proven to be a very hard and error-prone task and has become the bottleneck of the ontology acquisition process. WordNet, an electronic lexical database, is considered to be the most important resource available to researchers in computational linguistics. The paper proposes an ontology learning approach, which uses WordNet lexicon resources to build a standard OWL ontology model. The approach enables the automation of ontology building and can be very useful in ontology-based applications.
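The paper's exact WordNet-to-OWL mapping is not detailed in this summary; as a rough sketch of the general idea, the following assumes NLTK's WordNet interface and rdflib, mapping synsets to owl:Class nodes and hypernym links to rdfs:subClassOf (the namespace and starting synset are illustrative choices):

```python
# A minimal sketch (not the paper's exact mapping): take a WordNet synset,
# walk its hypernym chain, and emit an OWL class hierarchy with rdflib.
from nltk.corpus import wordnet as wn          # requires the NLTK WordNet data
from rdflib import Graph, Namespace, RDF, RDFS, OWL, URIRef

EX = Namespace("http://example.org/wordnet-owl#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

def synset_uri(synset):
    return URIRef(EX + synset.name().replace(".", "_"))

def add_class_with_hypernyms(synset):
    """Map a synset to owl:Class and its hypernyms to rdfs:subClassOf parents."""
    uri = synset_uri(synset)
    g.add((uri, RDF.type, OWL.Class))
    for parent in synset.hypernyms():
        parent_uri = synset_uri(parent)
        g.add((parent_uri, RDF.type, OWL.Class))
        g.add((uri, RDFS.subClassOf, parent_uri))
        add_class_with_hypernyms(parent)

add_class_with_hypernyms(wn.synset("dog.n.01"))
print(g.serialize(format="turtle"))
```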

22 citations


Posted Content
TL;DR: Li et al. as mentioned in this paper examine changes in operating performance of Chinese listed companies around their initial public offerings, and focus on the effect of ownership and ownership concentration on IPO performance changes.
Abstract: We examine changes in operating performance of Chinese listed companies around their initial public offerings, and focus on the effect of ownership and ownership concentration on IPO performance changes. We document a sharp decline in post-issue operating performance of IPO firms. We also find that neither state ownership nor concentration of ownership is associated with performance changes, but there is a curvilinear relation between legal-entity ownership and performance changes and between concentration of non-state ownership and performance changes. Our results are robust to different performance measures and industry adjustments. These findings suggest that agency conflicts, management entrenchment, and large shareholders' expropriation co-exist to influence Chinese IPO performance, and the beneficial and detrimental effects of state shareholdings tend to offset each other.

Proceedings ArticleDOI
22 Aug 2004
TL;DR: This paper develops efficient algorithms for maintaining a quotient cube with holistic aggregation functions that takes up reasonably small storage space and introduces two techniques called addset data structure and sliding window to deal with this problem.
Abstract: Data cube pre-computation is an important concept for supporting OLAP (Online Analytical Processing) and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement. The recently proposed quotient cube addressed this issue through a partitioning method that groups cube cells into equivalence partitions. Such an approach is not only useful for distributive aggregate functions such as SUM but can also be applied to holistic aggregate functions like MEDIAN. Maintaining a data cube for holistic aggregation is a hard problem: its difficulty lies in the fact that historical tuple values must be kept in order to compute the new aggregate when tuples are inserted or deleted. The quotient cube makes the problem harder since we also need to maintain the equivalence classes. In this paper, we introduce two techniques called the addset data structure and the sliding window to deal with this problem. We develop efficient algorithms for maintaining a quotient cube with holistic aggregation functions that take up reasonably small storage space. A performance study shows that our algorithms are effective, efficient and scalable over large databases.
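The addset and sliding-window structures are specific to the paper and are not reproduced here; the sketch below only illustrates the underlying difficulty the abstract points to: for a holistic aggregate such as MEDIAN, every cube cell must retain its contributing measure values so the aggregate can be recomputed on insertion (dimension names and data are made up):

```python
from collections import defaultdict
from itertools import combinations
from statistics import median

# Dimensions (A, B) and one measure. Each cube cell is a group-by over a
# subset of the dimensions; MEDIAN is holistic, so the raw values contributing
# to every cell must be retained to update the aggregate on insertion.
cells = defaultdict(list)   # cell key -> list of measure values

def cell_keys(dims):
    """All group-by cells of a tuple ('*' marks an aggregated dimension)."""
    names = list(dims)
    for r in range(len(names) + 1):
        for kept in combinations(names, r):
            yield tuple(dims[d] if d in kept else "*" for d in names)

def insert(tuple_dims, measure):
    for key in cell_keys(tuple_dims):
        cells[key].append(measure)

insert({"A": "a1", "B": "b1"}, 10)
insert({"A": "a1", "B": "b2"}, 30)
insert({"A": "a2", "B": "b1"}, 20)

for key, values in sorted(cells.items()):
    print(key, "MEDIAN =", median(values))
```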

Journal ArticleDOI
TL;DR: The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems, based on a proposition about reducts and the relation between reducts and the discernibility matrix; the algorithms improve execution time when compared with other methods.
Abstract: In the process of data mining of decision tables using the Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on a proposition about reducts and the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on some real-world domains. The results show that the proposed algorithms improve execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
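The two algorithms themselves are not spelled out in this summary; as a generic illustration of the discernibility-matrix idea they build on, the following toy sketch computes the matrix entries for a small decision table and extracts an attribute subset with a greedy set-cover heuristic (not the paper's procedure, and not guaranteed to yield a minimal reduct):

```python
from itertools import combinations

# Toy decision table: rows are objects, "d" is the decision attribute.
attributes = ["a", "b", "c"]
table = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 1, "b": 0, "c": 0, "d": "yes"},
]

# Discernibility matrix: for each pair of objects with different decisions,
# record the set of condition attributes on which they differ.
entries = []
for x, y in combinations(table, 2):
    if x["d"] != y["d"]:
        diff = frozenset(a for a in attributes if x[a] != y[a])
        if diff:
            entries.append(diff)

# Greedy heuristic: repeatedly pick the attribute that hits the most
# uncovered entries (a superset of some reduct, not necessarily minimal).
reduct, uncovered = set(), list(entries)
while uncovered:
    best = max(attributes, key=lambda a: sum(a in e for e in uncovered))
    reduct.add(best)
    uncovered = [e for e in uncovered if best not in e]

print("discernibility entries:", [set(e) for e in entries])
print("approximate reduct:", reduct)
```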

01 Jan 2004
TL;DR: This paper discusses the state of the art, the challenge problems that the authors face, and the future trends in database research field, which covers the hot topics such as information integration, stream data management, sensor database technology, XML datamanagement, data grid, self-adaptation, moving object management, small-footprint database, and user interface.
Abstract: This paper discusses the state of the art, the challenge problems that we face, and the future trends in database research field. It covers the hot topics such as information integration, stream data management, sensor database technology, XML data management, data grid, self-adaptation, moving object management, small-footprint database, and user interface.


Proceedings ArticleDOI
13 Jun 2004
TL;DR: XSeq is a powerful XML indexing infrastructure which makes tree patterns a first class citizen in XML query processing and achieves an additional performance advantage over methods indexing either just content or structure, or indexing them separately.
Abstract: Given a tree-pattern query, most XML indexing approaches decompose it into multiple sub-queries, and then join their results to provide the answer to the original query. Join operations have been identified as the most time-consuming component in XML query processing. XSeq is a powerful XML indexing infrastructure which makes tree patterns a first class citizen in XML query processing. Unlike most indexing methods that directly manipulate tree structures, XSeq builds its indexing infrastructure on a much simpler data model: sequences. That is, we represent both XML data and XML queries by structure-encoded sequences. We have shown that this new data representation preserves query equivalence, and more importantly, through subsequence matching, structured queries can be answered directly without resorting to expensive join operations. Moreover, the XSeq infrastructure unifies indices on both the content and the structure of XML documents, hence it achieves an additional performance advantage over methods indexing either just content or structure, or indexing them separately.

Journal ArticleDOI
TL;DR: The new version of the simulation platform, SIMEC3.0, is used as an example to describe how to design and implement an e-commerce simulation platform and how to use programming languages to implement it.
Abstract: In this paper, we describe how to design the core functions of an e-commerce simulation platform and how to use programming languages to implement it. Since last July, we have been working on the new version of the simulation platform, SIMEC3.0, in our Economy and Science Lab. In our project, we use the leading-edge object-oriented language Java and a subset of the J2EE framework as the technical architecture. XML is used in the platform to store and exchange all kinds of commerce documents. Because all the research on building an e-commerce simulation platform has much to do with the development of SIMEC, in this paper we use SIMEC3.0 as an example to describe how to design and implement an e-commerce simulation platform.

Journal ArticleDOI
TL;DR: In this paper, a new method of parametric estimate, which is named as synthesized expected Bayesian method, was developed and used to estimate failure probability, failure rate and some other parameters in exponential distribution and Weibull distribution of populations.
Abstract: This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
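The paper's definition of the synthesized expected Bayesian estimate is not given in this summary; as a generic, heavily simplified illustration of the zero-failure setting it addresses, the sketch below computes the Bayes estimate of a failure probability under a Beta(a, b) prior with zero observed failures and then averages it over a uniform hyperprior on b (an "expected Bayesian"-style step). All priors and numbers are illustrative assumptions, not the paper's method:

```python
import numpy as np

def bayes_zero_failure(n, b, a=1.0):
    """Posterior mean of the failure probability p under a Beta(a, b) prior
    after n trials with zero observed failures: a / (a + b + n)."""
    return a / (a + b + n)

def expected_bayes_zero_failure(n, c, a=1.0, grid=10_000):
    """Average the Bayes estimate over a uniform hyperprior b ~ U(0, c)."""
    b_values = np.linspace(0.0, c, grid)
    return float(np.mean(bayes_zero_failure(n, b_values, a)))

n = 50          # units tested, no failures observed (hypothetical)
for c in (2.0, 5.0, 10.0):
    print(f"c = {c:4.1f}  averaged estimate of p: {expected_bayes_zero_failure(n, c):.5f}")
```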

Journal ArticleDOI
TL;DR: In this paper, a parametric eclectic model has been developed for a non-linearly decreasing demand pattern, which can be used to generate alternative replenishment plans if the parameter vector is adjusted within a certain range.

Journal ArticleDOI
TL;DR: The paper examines three management models of HIDZ: government-oriented, enterprise-oriented and comprehensive management models, and explores the roles of HIDZ in Chinese economic development.
Abstract: This paper focuses on the development of Hi-tech Industrial Development Zones in China. Following a historical review and a brief comparison of the Economic-Technological Development Area (ETDA) and the Hi-tech Industrial Development Zone (HIDZ), it examines three management models of HIDZ: government-oriented, enterprise-oriented and comprehensive management models. Furthermore, the paper explores the roles of HIDZ in Chinese economic development. Finally, it briefly analyses four new trends of HIDZ in China.


Journal Article
TL;DR: Wang et al. as mentioned in this paper proposed that China should create suitable conditions to accelerate enterprises' assumption of corporate social responsibility, which plays a unique role in promoting sustainable development and can, to some extent, remedy the failures of both government and market.

Journal ArticleDOI
TL;DR: A new algorithm for solving mathematical programs with linear complementarity constraints is proposed that is globally convergent without requiring strong assumptions such as nondegeneracy or linear independence condition.
Abstract: In this paper, we propose a new algorithm for solving mathematical programs with linear complementarity constraints. The algorithm uses a method of approximately active search and introduces the idea of acceptable descent face. The main advantage of the new algorithm is that it is globally convergent without requiring strong assumptions such as nondegeneracy or linear independence condition. Numerical results are presented to show the effectiveness of the algorithm.
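The paper's algorithm (approximately active search with acceptable descent faces) is not reproduced here; purely as a generic point of reference, the sketch below solves a tiny problem with a linear-complementarity-style constraint by relaxing the complementarity condition to x*y <= eps and shrinking eps, using scipy's SLSQP solver (a common baseline approach, not the proposed method):

```python
import numpy as np
from scipy.optimize import minimize

# Tiny MPCC-style problem (illustration only, not the paper's algorithm):
#   minimize (x - 1)^2 + (y - 1)^2
#   subject to x >= 0, y >= 0, x * y <= eps   (relaxed complementarity)
# Driving eps -> 0 recovers the complementarity condition x * y = 0.
def solve_relaxed(eps, start):
    objective = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 1.0) ** 2
    constraints = [{"type": "ineq", "fun": lambda z: eps - z[0] * z[1]}]
    bounds = [(0.0, None), (0.0, None)]
    res = minimize(objective, start, bounds=bounds, constraints=constraints,
                   method="SLSQP")
    return res.x

z = np.array([0.5, 0.5])
for eps in (1.0, 0.1, 0.01, 1e-4):
    z = solve_relaxed(eps, z)    # warm-start each solve from the previous one
    print(f"eps = {eps:7.4f}  ->  x = {z[0]:.4f}, y = {z[1]:.4f}, x*y = {z[0]*z[1]:.5f}")
```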

Journal ArticleDOI
TL;DR: In this article, incremental algorithms are designed to update existing quotient cube efficiently based on Galois lattice and performance study shows that these algorithms are efficient and scalable for large databases.
Abstract: Data cube computation is a well-known expensive operation and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement, Recently proposed quotient cube addressed this fundamental issue through a partitioning method that groups cube cells into equivalent partitions. The effectiveness and efficiency of the quotient cube for cube compression and computation have been proved. However, as changes are made to the data sources, to maintain such a quotient cube is non-trivial since the equivalent classes in it must be split or merged. In this paper, incremental algorithms are designed to update existing quotient cube efficiently based on Galois lattice. Performance study shows that these algorithms are efficient and scalable for large databases.

Journal ArticleDOI
TL;DR: The authors examined the performance of technical trading rules in the emerging Chinese stock markets and found significant evidence to support the predictability and profitability of technical rules for Chinese foreign B-shares but not for domestic A-share.
Abstract: We examine the performance of technical trading rules in the emerging Chinese stock markets. After controlling for non-synchronous trading and transaction costs, we find significant evidence to support the predictability and profitability of technical rules for Chinese foreign B-shares but not for domestic A-shares. The index returns of B-shares can be explained by one-day-lagged own market trading signals, but not by the trading signals emitted from the corresponding A-share market or from the U.S. market (a proxy for the international market). However, after February 19, 2001, when domestic investors were allowed to trade B-shares, the predictive power of the trading rules in B-share markets disappeared. We conclude that the predictability of technical trading rules in B-share markets can be attributed to the gradual diffusion of information among foreign investors under the foreign share ownership restriction, and, partly, to positive autocorrelations induced by thin trading.
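The paper's trading-rule family and parameters are not listed in this summary; purely as an illustration of how a one-day-lagged technical signal is applied, the sketch below runs a simple moving-average crossover rule on a synthetic price series (all data and window lengths are made-up assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily index returns and the implied price level (illustration only).
returns = rng.normal(0.0005, 0.015, size=500)
prices = 100 * np.cumprod(1 + returns)

short_win, long_win = 5, 50     # hypothetical moving-average lengths

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Align the short and long moving averages on the same ending days.
short_ma = moving_average(prices, short_win)[long_win - short_win:]
long_ma = moving_average(prices, long_win)

# Signal: +1 when the short MA is above the long MA, -1 otherwise.
signal = np.where(short_ma > long_ma, 1.0, -1.0)

# Apply yesterday's signal to today's return (one-day lag).
lagged_signal = signal[:-1]
strategy_returns = lagged_signal * returns[long_win:]
print("buy-and-hold mean daily return :", returns[long_win:].mean())
print("MA rule mean daily return      :", strategy_returns.mean())
```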

Journal ArticleDOI
TL;DR: A novel solution for processing large quantities of electronic documents in multiple formats within a short timeframe based on Web services for integrating two-tiered distributed processing and involves a document extraction process for handling multiple document formats.
Abstract: Document management plays an important role in R&D project management for government funding agencies, universities, and research institutions. The advent of Web services and XML presents new opportunities for e-document management. This paper describes a novel solution for processing large quantities of electronic documents in multiple formats within a short timeframe. The solution is based on Web services for integrating two-tiered distributed processing. It also involves a document extraction process for handling multiple document formats, with XML as the intermediate for information exchange. The application of the solution at the National Natural Science Foundation of China (NSFC) proved successful, and the general approach may be applied to a broad range of e-document management settings.

Journal Article
TL;DR: In this paper, the authors provide a scientific clarification of grain and food security and a detailed introduction to the evaluation methodologies for food security developed by the Food and Agriculture Organization (FAO) and the Economic Research Service of the USDA.
Abstract: This paper provides a scientific clarification of grain and food security and a detailed introduction to the evaluation methodologies for food security developed by the Food and Agriculture Organization (FAO) and the Economic Research Service of the USDA. The methods used by domestic economists are discussed as well. The paper then develops a new evaluation indicator system for food security. The coefficient of food security of major producer and consumer countries in the same period is calculated based on the method, and the results show that China currently enjoys a high level of food security.

Journal Article
TL;DR: Wang et al. as discussed by the authors employed three methods, i.e., the average production function (APF), the frontier production function (FPF) and data envelopment analysis (DEA), to quantitatively assess the efficiency of China's urban wastewater treatment plants (WWTPs) and to identify the critical factors affecting that efficiency.
Abstract: Three methods, i.e., the average production function (APF), the frontier production function (FPF) and the data envelopment analysis (DEA) model, are employed to quantitatively assess the efficiency of China's urban wastewater treatment plants (WWTPs) and to identify the critical factors affecting the efficiency. Based on the data from 81 surveyed WWTPs, the three models are solved and several indicators are obtained. The examined variables cover the production elasticity of input factors, technological efficiency, the returns to scale, and so on. The study finds that capital is the factor with the highest production elasticity (0.507), followed by electricity with a production elasticity of 0.415, while labor is the factor with the lowest production elasticity. The average return to scale derived from the APF model is 1.11, which shows that there is high potential to increase the profit by augmenting the inputs. Based on the results of the DEA model, the relative efficiency of each sample WWTP is obtained and a full list of WWTPs can be ordered by efficiency, which would be helpful in establishing a benchmark standard for efficiency regulation.
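As a generic reference for the DEA component (the paper's model specification and data are not reproduced here), the following sketch solves the input-oriented CCR envelopment linear program for each plant with scipy, on made-up input/output figures:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 plants, inputs = (capital, labor, electricity),
# output = volume of wastewater treated. Values are made up for illustration.
inputs = np.array([[100.0, 20.0, 50.0],
                   [120.0, 25.0, 55.0],
                   [ 80.0, 30.0, 60.0],
                   [ 90.0, 15.0, 40.0],
                   [150.0, 35.0, 70.0]])
outputs = np.array([[500.0], [520.0], [450.0], [480.0], [600.0]])

n, m = inputs.shape           # number of plants, number of inputs
s = outputs.shape[1]          # number of outputs

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of plant o (envelopment-form LP)."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                          # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                                  # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(s):                                  # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

for o in range(n):
    print(f"plant {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```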


Journal Article
TL;DR: The work relevant to database systems on the grid is introduced, and topics in grid databases, including grid DBMSs, grid database integration, and new requirements of grid applications, are discussed.
Abstract: Grid computing is an important new technology. Database systems are needed for managing large amounts of data on the grid. The work relevant to database systems on the grid is introduced, and topics in grid databases, including grid DBMSs, grid database integration, and new requirements of grid applications, are discussed. It is suggested that database researchers do more research on grid databases to find and resolve the new problems raised by grid applications.