
Showing papers on "Database transaction published in 1988"


Journal ArticleDOI
TL;DR: In this article, the authors develop a theoretical extension to the basic transaction cost model by combining insights from dependence theory with the TCA approach, and introduce offsetting investments as a means of reducing transaction costs.
Abstract: The authors develop a theoretical extension to the basic transaction cost model by combining insights from dependence theory with the TCA approach. They introduce offsetting investments as a means ...

1,526 citations


Book
01 Jun 1988
TL;DR: Some areas which require further study are described: the integration of the transaction concept with the notion of abstract data type, some techniques to allow transactions to be composed of sub-transactions, and the handling of transactions which last for extremely long times.
Abstract: A transaction is a transformation of state which has the properties of atomicity (all or nothing), durability (effects survive failures) and consistency (a correct transformation). The transaction concept is key to the structuring of data management applications. The concept may have applicability to programming systems in general. This paper restates the transaction concepts and attempts to put several implementation approaches in perspective. It then describes some areas which require further study: (1) the integration of the transaction concept with the notion of abstract data type, (2) some techniques to allow transactions to be composed of sub-transactions, and (3) handling transactions which last for extremely long times (days or months).
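
To make the atomicity property concrete, here is a minimal Python sketch (names and structure are illustrative assumptions, not Gray's implementation) of a transaction that applies all of its writes or none of them, using an undo log:

```python
# Minimal sketch of atomicity (all or nothing) via an undo log.

class Transaction:
    def __init__(self, store):
        self.store = store        # shared key/value state
        self.undo_log = []        # (key, old value) pairs for rollback

    def write(self, key, value):
        # Record the prior value so the write can be undone on abort.
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def commit(self):
        # A real system would force a log to stable storage here for
        # durability; the sketch just discards the undo information.
        self.undo_log.clear()

    def abort(self):
        # Undo writes in reverse order, restoring the prior state.
        for key, old in reversed(self.undo_log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.undo_log.clear()

accounts = {"a": 100, "b": 50}
t = Transaction(accounts)
t.write("a", 70)
t.write("b", 80)
t.abort()                         # all or nothing: both writes undone
assert accounts == {"a": 100, "b": 50}
```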

759 citations


Patent
29 Jun 1988
TL;DR: In this paper, a transaction terminal for use with diverse credit or other transaction cards is provided with an interface unit for receiving a plurality of modules, wherein each module contains programming information corresponding to transactions that may be carried out with at least one of the diverse identification cards presented to the machine.
Abstract: A transaction terminal for use with diverse credit or other transaction cards is provided with an interface unit for receiving a plurality of modules, wherein each module contains programming information corresponding to transactions that may be carried out with at least one of the diverse identification cards presented to the machine. Accordingly, financial institutions that issue cards can independently arrange and program their own security and transaction routines which are to be carried out with the cards and distribute such routines in a secure manner for use on a common terminal.

390 citations


Patent
17 Feb 1988
TL;DR: A central computer equipped with communications hardware and specially designed software receives transaction data from personal transaction stations operated by traders, sends back verification information to the traders, reconciles all trades, informs traders when an error occurs, generates complete records of all transactions, reports price and volume data to quote vendors, provides numerous reports which analyze trading activity to detect potential regulatory violations, creates a complete real-time backup copy of all data, and provides intraday profit, loss, risk, and margin information to exchange and Futures Commission Merchant personnel.
Abstract: A central computer equipped with communications hardware and specially designed software receives transaction data from personal transaction stations operated by traders, sends back verification information to the traders, reconciles all trades, informs traders when an error occurs, generates complete records of all transactions, reports price and volume data to quote vendors, provides numerous reports which analyze trading activity to detect potential regulatory violations, creates a complete real-time backup copy of all data, and provides intraday profit, loss, risk, and margin information to exchange and Futures Commission Merchant personnel.

319 citations


Proceedings ArticleDOI
29 Aug 1988
TL;DR: In this article, the authors propose the split-transaction, a new database operation that allows transactions to commit data that will not change and to serialize interactions with other concurrent activities through the committed data.
Abstract: Open-ended activities such as CAD/CAM, VLSI layout and software development require the consistent concurrent access and fault tolerance associated with database transactions, but their uncertain duration, uncertain developments during execution and long interactions with other concurrent activities break traditional transaction atomicity boundaries. We propose the split-transaction, a new database operation that solves the above problems by permitting transactions to commit data that will not change. Thus an open-ended activity can release the committed data and serialize interactions with other concurrent activities through the committed data.
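
A rough Python sketch of the split idea, under assumed semantics (the interface below is hypothetical, not the paper's): a long transaction commits early the part of its write set that will not change, so other activities can serialize against it, and keeps working on the rest:

```python
# Sketch of committing stable data early from a long-running transaction.

class SplitTransaction:
    def __init__(self, store):
        self.store = store
        self.pending = {}              # uncommitted writes

    def write(self, key, value):
        self.pending[key] = value

    def split_commit(self, stable_keys):
        # Commit only data that will not change; the remaining write
        # set stays with the continuing transaction.
        for key in stable_keys:
            self.store[key] = self.pending.pop(key)

    def commit(self):
        self.store.update(self.pending)
        self.pending.clear()

db = {}
design = SplitTransaction(db)
design.write("cell_A", "routed")
design.write("cell_B", "in progress")
design.split_commit(["cell_A"])        # release finished work early
assert db == {"cell_A": "routed"}
design.write("cell_B", "routed")
design.commit()
assert db == {"cell_A": "routed", "cell_B": "routed"}
```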

234 citations


Journal ArticleDOI
TL;DR: In this paper, a general analytical framework for characterizing transactions is proposed, which is suitable for interpreting actual transactions as well as for creating hypothetical transactions for research purposes; it is described both in general terms and with special application to one particular kind of transaction, contingent valuation studies in which individuals estimate the value of possible changes in atmospheric visibility.
Abstract: People express their value for a good when they pay something for it. Interpreting good and payment very broadly, we offer a general analytical framework for characterizing such transactions. This framework is suitable for interpreting actual transactions as well as for creating hypothetical transactions for research purposes. It is described here both in general terms and with special application to one particular kind of transaction, contingent valuation studies in which individuals estimate the value of possible changes in atmospheric visibility. In these transactions, as in many others, risk (of undesired changes in visibility) is one principal feature; at least some uncertainty often surrounds other transaction features as well (For example: How much will visibility really change if I promise to pay for it? Will I really have to pay?). The framework presented here conceptualizes any transaction as involving (a) a good, (b) a payment, and (c) a social context within which the transaction is conducted. Each of these aspects in turn has a variety of features that might and in some cases should affect evaluations. For each such feature, the framework considers first the meaning of alternative specifications and then the difficulties of ensuring that they are understood and evaluated properly. As a whole, the framework provides an integrated approach to designing evaluation studies and interpreting their results.

232 citations


Book ChapterDOI
01 Sep 1988
TL;DR: It is argued that ECA rules should be thought of as first class objects in an object-oriented data model and the association of timing constraints and contingency plans with rules is discussed.
Abstract: Event-Condition-Action (ECA) rules are proposed as a general mechanism for providing active database capabilities in support of applications that require timely response to critical situations. These rules generalize mechanisms such as assertions, triggers, alerters, database procedures, and production rules that have previously been proposed for supporting such DBMS functions as integrity control, access control, derived data management, and inferencing. This paper argues that ECA rules should be thought of as first class objects in an object-oriented data model. It identifies concepts for modelling the components and properties of rule objects: events (database operations, temporal events, abstract signals from arbitrary user processes, and complex events constructed from these primitive ones); conditions (queries over the database); actions (programs in the query language or some programming language); and coupling modes (which describe whether the event, condition, and action components of a rule should be executed in a single transaction or in separate transactions). The paper discusses the association of timing constraints and contingency plans with rules. Finally, it describes operations on rule objects. The emphasis of the paper is on modelling concepts, rather than on specific syntax.
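
As a toy illustration (not the paper's actual interface; all names are assumptions), an ECA rule can be modelled as an object bundling an event, a condition over the database, and an action:

```python
# Sketch of Event-Condition-Action rules as first-class objects.

class Rule:
    def __init__(self, event, condition, action):
        self.event = event            # primitive event name, e.g. "update"
        self.condition = condition    # callable: data -> bool
        self.action = action          # callable: data -> None

class ActiveDB:
    def __init__(self):
        self.data = {}
        self.rules = []

    def update(self, key, value):
        self.data[key] = value
        self.signal("update")

    def signal(self, event):
        # Fire matching rules; a real system would also honor coupling
        # modes (same transaction vs. separate transaction) here.
        for r in self.rules:
            if r.event == event and r.condition(self.data):
                r.action(self.data)

db = ActiveDB()
# Integrity-control rule: clamp a negative balance as a timely response.
db.rules.append(Rule(
    "update",
    condition=lambda d: d.get("balance", 0) < 0,
    action=lambda d: d.update(balance=0),
))
db.update("balance", -25)
assert db.data["balance"] == 0
```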

224 citations


Journal ArticleDOI
01 Mar 1988
TL;DR: This paper discusses solutions for two problems: how to model real-time constraints for database transactions, and how to schedule transactions when time constraints add a new dimension to concurrency control.
Abstract: Scheduling transactions with real-time requirements presents many new problems. In this paper we discuss solutions for two of these problems. The first is modeling: what is a reasonable method for modeling real-time constraints for database transactions? Traditional hard real-time constraints (e.g., deadlines) may be too limited; many transactions have soft deadlines, and a more flexible model is needed to capture these soft time constraints. The second problem we address is scheduling. Time constraints add a new dimension to concurrency control. Not only must a schedule be serializable but it also should meet the time constraints of all the transactions in the schedule.
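
A minimal sketch of the soft-deadline idea (the value function and its parameters are illustrative assumptions): a transaction's value decays after its deadline instead of dropping to zero at once, and the scheduler runs the transaction with the highest current value:

```python
# Soft deadlines as a decaying value function over time.

def value(txn, now):
    """Full value before the deadline, linear decay to zero afterwards."""
    if now <= txn["deadline"]:
        return txn["value"]
    overrun = now - txn["deadline"]
    return max(0.0, txn["value"] * (1 - overrun / txn["grace"]))

def pick_next(transactions, now):
    # Highest current value first; ties broken by earliest deadline.
    return max(transactions, key=lambda t: (value(t, now), -t["deadline"]))

txns = [
    {"name": "t1", "value": 10.0, "deadline": 5.0, "grace": 5.0},
    {"name": "t2", "value": 8.0,  "deadline": 2.0, "grace": 1.0},
]
assert pick_next(txns, now=0.0)["name"] == "t1"
assert pick_next(txns, now=6.0)["name"] == "t1"  # t2's value decayed to zero
```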

205 citations


Patent
26 Oct 1988
TL;DR: In this article, the authors describe a communication system in which a base station, called a Controller, arbitrates, controls and communicates with one or more remote or satellite stations, each called a Communicator, which are in range to receive its transmissions.
Abstract: A communication system includes a base station, called a "Controller" and one or more remote or satellite stations, each called a "Communicator". The controller arbitrates, controls and communicates with the communicators which are in range to receive its transmissions. The communicators receiving a particular controller's transmission form a network for that controller for the period in which reception occurs. The controller is the only generator of electromagnetic radiation which it modulates with information relating to its own identity, transactions it undertakes and information it transfers. Each communicator modulates and re-radiates the received transmission using back-scatter. Back-scatter re-radiation keeps the communicator design simple and allows for very sensitive receiver design in the controller. In operation, the controller initiates communication with each communicator by establishing, through a handshake exchange, the unique communications channel it will maintain with that communicator. Once channels are established, the controller repetitively polls each communicator for a sequential, cumulative interchange of data. The controller continually looks out for new communicators entering its network and de-activates polling of communicators whose transactions are complete. Whenever a network is not active because there are no communicators present, or any in need of a transaction or data interchange, the controller polls at a reduced rate.

203 citations


Proceedings ArticleDOI
18 Apr 1988
TL;DR: A formal security policy model that uses basic view concepts for a secure multilevel relational database system is described; the model defines application-independent properties for entity integrity, referential integrity, and polyinstantiation integrity.
Abstract: A formal security policy model that uses basic view concepts for a secure multilevel relational database system is described. The model is formulated in two layers, one corresponding to a security kernel or reference monitor that enforces mandatory security, and the other defining multilevel relations and formalizing policies for labeling new and derived data, data consistency, discretionary security, and transaction consistency. This includes the policies for sanitization, aggregation, and downgrading. The model also defines application-independent properties for entity integrity, referential integrity, and polyinstantiation integrity.
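
For illustration, a toy sketch of the mandatory checks a kernel layer of this kind enforces; the lattice and rule names below are standard textbook conventions, assumed here rather than taken from the paper:

```python
# Sketch of mandatory access checks over a linear security lattice.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def can_read(subject_level, object_level):
    # "No read up": the subject's clearance must dominate the data label.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # "No write down": a subject may not write below its own level.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "confidential")
assert not can_read("confidential", "secret")
assert not can_write("secret", "unclassified")
```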

187 citations


Proceedings ArticleDOI
12 Dec 1988
TL;DR: The author describes a model and notation for specifying and enforcing aspects of integrity policies, particularly separation of duties; the key idea is to associate a transaction control expression with each information object.
Abstract: The author describes a model and notation for specifying and enforcing aspects of integrity policies, particularly separation of duties. The key idea is to associate a transaction control expression with each information object. The transaction control expression constrains the pattern in which transactions can be executed on an object. As operations are actually executed, the transaction control expression gets converted to a history. This history serves to enforce separation of duties. Transient objects with a short lifetime are distinguished from persistent objects which are long-lived. Separation of duties is achieved by maintaining a complete history for transient objects but only a partial history for persistent objects. This is possible because of the system-enforced rule that transactions are executed on persistent objects only as a side effect of execution on transient objects.
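
An illustrative Python sketch of a transaction control expression enforced through a history; the expression syntax and enforcement rules here are assumptions for illustration, not the author's notation:

```python
# Sketch: a required transaction pattern per object, plus a history
# that enforces separation of duties (no user performs two steps).

class ControlledObject:
    def __init__(self, pattern):
        self.pattern = pattern     # required transaction sequence
        self.history = []          # (transaction, user) pairs so far

    def execute(self, transaction, user):
        step = len(self.history)
        if step >= len(self.pattern) or self.pattern[step] != transaction:
            raise PermissionError(f"{transaction} violates the pattern")
        if any(u == user for _, u in self.history):
            raise PermissionError("separation of duties: same user twice")
        self.history.append((transaction, user))

voucher = ControlledObject(["prepare", "approve", "issue"])
voucher.execute("prepare", "clerk")
voucher.execute("approve", "supervisor")
try:
    voucher.execute("issue", "clerk")   # same clerk again: rejected
except PermissionError as e:
    print(e)
```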

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the relation between investor wealth and the value of annual and quarterly earnings announcements as implied by the behavior of mean stock trade transaction sizes at announcement dates, and transaction size-stratified trading activity in postannouncement time periods.
Abstract: This paper empirically investigates the relation between investor wealth and the value of annual and quarterly earnings announcements as implied by (1) the behavior of mean stock trade transaction sizes at announcement dates, and (2) transaction size-stratified trading activity in postannouncement time periods. Announcement-period mean transaction sizes are found to exceed expected mean transaction sizes estimated from trading activity occurring in nonannouncement periods. This result is attributed to a greater relative trading response to earnings announcements by wealthier investors, consistent with Ohlson's [1975] proposition that information value increases with investor wealth in a security market setting. Further evidence of a positive relation between value and wealth is found when postearnings announcement stock transactions are stratified by share size into three strata, where the first stratum (small stratum) is transactions of 100 and 200 shares, the second stratum (large stratum) is transactions of 300 to 900 shares, and the third stratum (institutional

Journal ArticleDOI
01 Jun 1988
TL;DR: A generalized relational model for a temporal database is proposed which allows time stamping with respect to a Boolean algebra of multidimensional time stamps; a two-dimensional instance of the model can be used to query past database states and to classify and query errors and updates.
Abstract: We propose a generalized relational model for a temporal database which allows time stamping with respect to a Boolean algebra of multidimensional time stamps. The interplay between the various temporal dimensions is symmetric. As an application, a two-dimensional model which allows objects with real-world and transaction-oriented time stamps is discussed. The two-dimensional model can be used to query the past states of the database. It can also be used to give a precise classification of the errors and updates in a database, and is a promising approach for querying these errors and updates.
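
A small sketch of the two-dimensional case: each fact carries a real-world (valid-time) interval and a transaction-time interval, so one can ask what the database believed at an earlier transaction time. The tuple encoding is an illustrative assumption:

```python
# Bitemporal facts: (key, value, valid_from, valid_to, tx_from, tx_to).

INF = float("inf")

facts = [
    ("salary", 50, 2000, 2005, 1, 3),    # recorded at tx 1, superseded at tx 3
    ("salary", 55, 2000, 2005, 3, INF),  # correction recorded at tx 3
]

def as_of(facts, valid_time, tx_time):
    """What did the database believe at tx_time about valid_time?"""
    return [v for (_, v, vf, vt, tf, tt) in facts
            if vf <= valid_time < vt and tf <= tx_time < tt]

assert as_of(facts, valid_time=2002, tx_time=2) == [50]  # before correction
assert as_of(facts, valid_time=2002, tx_time=4) == [55]  # after correction
```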

Journal ArticleDOI
01 Jun 1988
TL;DR: In this paper, the authors describe transaction management in ORION, an object-oriented database system that supports concurrency control mechanism based on extensions to the current theory of locking, and a transaction recovery scheme based on conventional logging.
Abstract: In this paper, we describe transaction management in ORION, an object-oriented database system. The application environments for which ORION is intended led us to implement the notions of sessions of transactions, and hypothetical transactions (transactions which always abort). The object-oriented data model which ORION implements complicates locking requirements. ORION supports a concurrency control mechanism based on extensions to the current theory of locking, and a transaction recovery mechanism based on conventional logging.
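
For illustration, a compatibility check for the hierarchical (intention) lock modes that extensions of classical locking theory build on; the matrix below is the textbook granularity-locking convention, assumed here rather than ORION's exact protocol:

```python
# Lock-mode compatibility for intention (IS/IX) and plain (S/X) locks.

COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

def can_grant(requested, held_modes):
    # A lock is granted only if it is compatible with every held lock.
    return all(COMPATIBLE[(requested, h)] for h in held_modes)

assert can_grant("IS", ["IX"])     # intention modes coexist on a class
assert not can_grant("X", ["IS"])  # exclusive access must wait
```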

Patent
Wyner Spencer1
19 Sep 1988
TL;DR: In this paper, a usage promotion method for use with a payment card data interchange system is presented, in which a computer selects winning transactions from card transaction data files and, after manual verification, credits the accounts of the winning card holders.
Abstract: A usage promotion method for use with a payment card data interchange system in which a computer examines card transaction data files to select transactions conforming to specified criteria. The computer eliminates arbitrarily all but a fraction of the selected transactions, selects winning transactions by using the bytes of the computer's system clock, and produces a verifiable list of winning transactions which, after manual verification, are used to create credits to the accounts of the card holders of the winning transactions.


Journal ArticleDOI
01 Jun 1988
TL;DR: This paper defines a transaction model that allows for several alternative notions of correctness without the requirement of serializability, and investigates classes of schedules for transactions that are richer than analogous classes under the classical model.
Abstract: In the classical approach to transaction processing, a concurrent execution is considered to be correct if it is equivalent to a non-concurrent schedule. This notion of correctness is called serializability. Serializability has proven to be a highly useful concept for transaction systems for data-processing style applications. Recent interest in applying database concepts to applications in computer-aided design, office information systems, etc. has resulted in transactions of relatively long duration. For such transactions, there are serious consequences to requiring serializability as the notion of correctness. Specifically, such systems either impose long-duration waits or require the abortion of long transactions. In this paper, we define a transaction model that allows for several alternative notions of correctness without the requirement of serializability. After introducing the model, we investigate classes of schedules for transactions. We show that these classes are richer than analogous classes under the classical model. Finally, we show the potential practicality of our model by describing protocols that permit a transaction manager to allow correct non-serializable executions.
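
For contrast with the relaxed notions the paper develops, the classical criterion can be stated as a cycle check on the conflict graph. A minimal sketch (the schedule encoding is an assumption):

```python
# Conflict-serializability test: the conflict graph must be acyclic.

from itertools import combinations

def conflict_edges(schedule):
    # schedule: (txn, op, item) triples in execution order, op in {"r", "w"}
    edges = set()
    for (t1, op1, x1), (t2, op2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and "w" in (op1, op2):
            edges.add((t1, t2))      # t1's operation precedes t2's
    return edges

def is_conflict_serializable(schedule):
    nodes = {t for t, _, _ in schedule}
    edges = conflict_edges(schedule)
    while nodes:                     # Kahn's algorithm: peel off sources
        sources = {n for n in nodes if all(v != n for _, v in edges)}
        if not sources:
            return False             # a cycle remains: not serializable
        nodes -= sources
        edges = {(u, v) for u, v in edges if u in nodes and v in nodes}
    return True

# T2's write is sandwiched between T1's read and write of x: a cycle.
s = [("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")]
assert not is_conflict_serializable(s)
```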

Patent
02 Nov 1988
TL;DR: In this paper, an interactive transaction system is provided where a user interacts with the system by means of a telephone which delivers output signals from the user and receives input signals from a transaction microprocessor.
Abstract: An interactive transaction system is provided. A user interacts with the system by means of a telephone which delivers output signals from the user and receives input signals from the system. The telephone is interfaced to a communication network through a switching unit. An account is provided from which the amount of the transaction is debited. A transaction microprocessor is interfaced to receive the input signals from the telephone which identify the transaction, the amount of the transaction and the user's personal identification code. The transaction microprocessor then communicates with the account microprocessor to authorize and complete the transaction.

Patent
Donald Nama1
01 Dec 1988
TL;DR: In this article, the authors propose an automatic financial transaction surveillance system in which an apparatus generates a video image signal representing a situation involving a financial transaction at an automatic teller machine or at a retail point of sale.
Abstract: The invention is directed to an automatic financial transaction surveillance system wherein an apparatus generates a video image signal that represents a situation involving a financial transaction at an automatic teller machine or at a retail point of sale. A data entry mechanism is employed to generate an alphanumeric transaction information data signal that represents the nature of the transaction. A video camera arrangement generates a video image signal that represents the actual situation in which the financial transaction is taking place. A transaction information module is responsively electrically coupled to the video image signal generating camera arrangement and to the data entry mechanism that generates the alphanumeric transaction information data signal. The transaction information module is responsive to the video image signal and the transaction information data signal to thereby provide a combined video image and transaction information output signal that may be utilized to generate a video image for subsequent study of the nature and content of the financial transaction.

Book ChapterDOI
14 Mar 1988
TL;DR: The fundamental theory of multi-level concurrency control and recovery is presented and it is shown how the theory helps to understand and explain in a systematic framework techniques that are in use in today's DBMSs.
Abstract: A useful approach to the design and description of complex data management systems is the decomposition of a system into a hierarchically organized collection of levels. In such a system, transaction management is distributed among the levels. This paper presents the fundamental theory of multi-level concurrency control and recovery. A model for the computation of multi-level transactions is introduced by generalizing from the well known single-level theory. Three basic principles, called commutation, reduction, and abstraction, are explained. Using them enables one to explain and prove seemingly "tricky" implementation techniques as correct, by regarding them as multi-level algorithms. We show how the theory helps to understand and explain in a systematic framework techniques that are in use in today's DBMSs. We also discuss how and why multi-level algorithms may achieve better performance than single-level ones.
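
A tiny sketch of the commutation principle (the encoding is assumed for illustration): two operations that conflict at the record level may commute at a higher level of abstraction, which is what licenses the extra interleavings:

```python
# Commutation at a higher level of abstraction.

def commutes(op1, op2):
    # ops are (operation name, object) pairs. Semantic knowledge at the
    # higher level: increments of the same counter commute, even though
    # at the record level both are writes and would conflict.
    if op1[1] != op2[1]:
        return True                  # different objects never conflict
    return op1[0] == op2[0] == "increment"

assert commutes(("increment", "balance"), ("increment", "balance"))
assert not commutes(("increment", "balance"), ("read", "balance"))
```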

Journal ArticleDOI
01 Mar 1988
TL;DR: It is felt that long communication delays may be a factor in limiting the performance of real-time distributed database systems, so a concurrency control algorithm is presented whose performance is not limited by communication delays.
Abstract: Real-time database systems support applications which have severe performance constraints such as fast response time and continued operation in the face of catastrophic failures. Real-time database systems are still in the state of infancy, and issues and alternatives in their design are not very well explored. In this paper, we discuss issues in the design of real-time database systems and discuss different alternatives for resolving these issues. We discuss the aspects in which requirements and design issues of real-time database systems differ from those of conventional database systems. We discuss two approaches to design real-time database systems, viz., main memory resident databases and design by trading a feature (like serializability).We also discuss requirements in the design of real-time distributed database systems, and specifically discuss issues in the design of concurrency control and crash recovery. It is felt that long communication delays may be a factor in limiting the performance of real-time distributed database systems. We present a concurrency control algorithm for real-time distributed database systems whose performance is not limited by communication delays.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: Design issues are discussed for building a standby system that maintains an up-to-date copy of the database in situations where the cost of a breach in service due to a disaster becomes financially intolerable.
Abstract: As computer systems process higher and higher volumes of economic transactions, the cost of a breach in service due to a disaster becomes financially intolerable. In situations like this, it becomes economically feasible to maintain a standby system with an up-to-date copy of the database. The design issues in building such a system are discussed.
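
A hedged sketch of the general idea (names and framing are assumptions, not the paper's design): the primary ships redo-log records to the standby, which replays them in order to maintain the up-to-date copy:

```python
# Log shipping from a primary to a remote standby.

class Backup:
    def __init__(self):
        self.db, self.applied = {}, 0

    def apply(self, record):
        lsn, key, value = record
        assert lsn == self.applied + 1   # replay the log strictly in order
        self.db[key] = value
        self.applied = lsn

class Primary:
    def __init__(self, backup):
        self.db, self.lsn, self.backup = {}, 0, backup

    def commit(self, key, value):
        self.lsn += 1
        self.db[key] = value
        # Ship the log record before acknowledging the commit; an
        # asynchronous variant would trade safety of the log tail for speed.
        self.backup.apply((self.lsn, key, value))

standby = Backup()
primary = Primary(standby)
primary.commit("x", 1)
primary.commit("y", 2)
assert standby.db == primary.db   # standby holds an up-to-date copy
```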


Journal ArticleDOI
01 Jun 1988
TL;DR: This paper proposes a multi-copy algorithm that works well in both retrieval and update environments by exploiting special application semantics: subdividing transactions into various categories and utilizing a commutativity property.
Abstract: Data is often replicated in distributed database applications to improve availability and response time. Conventional multi-copy algorithms deliver fast response times and high availability for read-only transactions while sacrificing these goals for updates. In this paper, we propose a multi-copy algorithm that works well in both retrieval and update environments by exploiting special application semantics. By subdividing transactions into various categories, and utilizing a commutativity property, we demonstrate cheaper techniques and show that they guarantee correctness. A performance comparison between our techniques and conventional ones quantifies the extent of the savings.
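
A minimal sketch of the commutativity property being exploited (the transaction categories and API are illustrative assumptions): commutative updates such as deposits can be applied at different replicas in different orders and still converge:

```python
# Commutative updates converge regardless of application order.

class Replica:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):           # a commutative update
        self.balance += amount

r1, r2 = Replica(), Replica()
ops = [("deposit", 10), ("deposit", 5), ("deposit", 20)]

for name, amount in ops:                 # one order at replica 1
    getattr(r1, name)(amount)
for name, amount in reversed(ops):       # a different order at replica 2
    getattr(r2, name)(amount)

assert r1.balance == r2.balance == 35    # same final state at both copies
```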

Journal ArticleDOI
TL;DR: A large class of relational database update transactions is investigated with respect to equivalence and optimization, and a simple, natural subclass of transactions, called strongly acyclic, is shown to have particularly desirable properties.
Abstract: A large class of relational database update transactions is investigated with respect to equivalence and optimization. The transactions are straight-line programs with inserts, deletes, and modifications using simple selection conditions. Several basic results are obtained. It is shown that transaction equivalence can be decided in polynomial time. A number of optimality criteria for transactions are then proposed, as well as two normal forms. Polynomial-time algorithms for transaction optimization and normalization are exhibited. Also, an intuitively appealing system of axioms for proving transaction equivalence is introduced. Finally, a simple, natural subclass of transactions, called strongly acyclic, is shown to have particularly desirable properties.
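
To illustrate the flavor of such optimizations, here is a sketch of one easy special case (whole-tuple operations with equality conditions, an assumption far simpler than the paper's class): in a straight-line program, only the last operation on each tuple affects the final state:

```python
# Shortening a straight-line transaction to its net effect per key.

def optimize(ops):
    """ops: list of ("insert" | "delete", key) in program order.
    Return an equivalent program with one operation per key."""
    net = {}
    for kind, key in ops:
        net[key] = kind            # the last operation on a key wins
    return [(kind, key) for key, kind in net.items()]

txn = [("insert", "a"), ("insert", "b"), ("delete", "a"), ("insert", "a")]
assert optimize(txn) == [("insert", "a"), ("insert", "b")]
```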

Journal ArticleDOI
TL;DR: In the early 1950s Simon Kuznets questioned whether many types of service income in developed countries were misclassified as final income when they were, in fact, a type of intermediate product as mentioned in this paper.
Abstract: In the early 1950s Simon Kuznets questioned whether many types of service income in developed countries were misclassified as final income when they were, in fact, a type of intermediate product. Several years ago, we began an investigation into the size of the transaction sector in the American economy to ascertain whether transaction costs were inappropriately classified as final product and should, therefore, be excluded from national income. When we found the transaction sector had grown from 25 percent of GNP in 1870 to 45 percent of GNP in 1970 the potential miscounting loomed larger than we expected. Our critics and commentators have suggested the Kuznetsian adjustment as the natural next step. Somewhat to our surprise, given the magnitude of the transaction sector estimates, we have since found that almost the entire transaction sector is already treated appropriately in the national accounts. The calculations are of interest in themselves, in terms of the composition of the transaction sector, and as empirical evidence for central hypotheses in the works of Oliver Williamson and Alfred D. Chandler. Table 1 presents the results of our earlier study. We measured the transaction sector by taking all the resources used in the "transaction industries" (wholesale and retail trade; and finance, insurance and real estate, FIRE) and adding wages paid employees in transaction-related occupations in all other industries, the "non-transaction" industries. These occupations encompass managers, supervisors, clerical workers, and employees in purchasing and marketing departments. Similar occupational classifications were used to calculate the size of the transaction sector within government. We found that 45 percent of the increase in the size of the transaction sector between 1870 and 1970 was due to increases in transaction services produced by transaction industries and sold in the market, 37 percent was due to transaction services produced and consumed within firms in non-transaction industries, while the remaining 8 percent was due to growth in government transaction services. Almost all growth in the transaction sector can be attributed to the private sector,

Book ChapterDOI
01 Jan 1988
TL;DR: The initial design of a main memory database (MMDB) backend database machine (DBM) is described, designed to provide quick recovery after transaction, system, or media failure, and to also provide efficient transaction processing.
Abstract: The initial design of a main memory database (MMDB) backend database machine (DBM) is described. This MAin memory Recoverable database with Stable log (MARS) is designed to provide quick recovery after transaction, system, or media failure, and to also provide efficient transaction processing.

Book
01 Aug 1988
TL;DR: This dissertation presents viewstamped replication, a new algorithm for the implementation of highly available computer services that continue to be usable in spite of node crashes and network partitions, based on a primary copy technique and integrated into the fabric of an atomic transaction mechanism.
Abstract: This dissertation presents viewstamped replication, a new algorithm for the implementation of highly available computer services that continue to be usable in spite of node crashes and network partitions. Our goal is to design an efficient mechanism that makes it easy for programmers to implement these services without complicating the programming model. Our replication method is based on a primary copy technique, where one replica is the primary and others are backups, and is integrated into the fabric of an atomic transaction mechanism. Transactions are run only at the primary and need not involve the backups; the primary propagates the effects of transaction processing to the backups in the background. The method exhibits low delay during normal operation, has low overhead, and increases the likelihood that transactions will commit in spite of failures. When failure occurs, replicas are reorganized automatically and a new primary is selected if the old one becomes inaccessible. This reorganization is called a view change and is accomplished by a view management algorithm. Since the primary only communicates with the backups in background mode, the effects of some processing may be lost after a view change; the affected transactions must abort. If the effects are known at the new primary, then no information is lost and the transaction can commit. Furthermore, if transactions commit, we guarantee that their effects are not lost. A special kind of timestamp, called a viewstamp, allows the algorithm to distinguish these cases inexpensively.
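
A much-simplified sketch of the viewstamp idea (no quorums or real view management; names are assumptions): a viewstamp pairs a view number with a sequence number within that view, letting the new primary decide after a view change which transactions' effects are known and may commit:

```python
# Viewstamps: (view, sequence) pairs identifying transaction effects.

class Replica:
    def __init__(self):
        self.view = 0
        self.seq = 0
        self.log = []                       # (viewstamp, effect) pairs

    def record(self, effect):
        self.seq += 1
        self.log.append(((self.view, self.seq), effect))
        return (self.view, self.seq)

def known_after_view_change(viewstamp, new_primary_log):
    # A transaction commits only if its effects reached the new primary;
    # otherwise it must abort.
    return any(vs == viewstamp for vs, _ in new_primary_log)

primary, backup = Replica(), Replica()
vs1 = primary.record("t1 writes x")
backup.log.append((vs1, "t1 writes x"))    # propagated in the background
vs2 = primary.record("t2 writes y")        # not yet propagated

# The primary crashes; the backup becomes the new primary in the next view.
assert known_after_view_change(vs1, backup.log)       # t1 can commit
assert not known_after_view_change(vs2, backup.log)   # t2 must abort
```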

Journal ArticleDOI
TL;DR: The controlled-generator model of P.J. Ramadge and W.M. Wonham (1988) is used to formulate the concurrent execution of transactions in database systems as a control problem for a partially observed discrete-event dynamical system.
Abstract: The controlled-generator model of P.J. Ramadge and W.M. Wonham (1988) is used to formulate the concurrent execution of transactions in database systems as a control problem for a partially observed discrete-event dynamical system. The control objectives of this problem (for concurrency control and recovery) and the properties of some important transaction scheduling techniques are characterized in terms of the language generated by the controlled process and in terms of the state of an ideal complete-information scheduler. Results about the performance of these techniques are presented.

Journal ArticleDOI
01 Nov 1988
TL;DR: Different synchronization methods for replicated data in distributed database systems are classified by underlying mechanisms and the type of information they use in ordering the operations of transactions, and some of the replication management methods that have appeared in the literature are surveyed.
Abstract: Replication is the key factor in improving the availability of data in distributed systems. Replicated data is stored at multiple sites so that it can be accessed by the user even when some of the copies are not available due to site failures. A major restriction to using replication is that replicated copies must behave like a single copy, i.e., mutual consistency as well as internal consistency must be preserved. Synchronization techniques for replicated data in distributed database systems have been studied in order to increase the degree of concurrency and to reduce the possibility of transaction rollback. In this paper, we classify different synchronization methods by the underlying mechanisms and the type of information they use in ordering the operations of transactions, and survey some of the replication management methods that have appeared in the literature.