scispace - formally typeset

Showing papers on "Serialization published in 2006"


Proceedings ArticleDOI
Mandana Vaziri1, Frank Tip1, Julian Dolby1
11 Jan 2006
TL;DR: This work presents a new definition of data races in terms of 11 problematic interleaving scenarios, and proves that it is complete by showing that any execution not exhibiting these scenarios is serializable for a chosen set of locations.
Abstract: Concurrency-related bugs may happen when multiple threads access shared data and interleave in ways that do not correspond to any sequential execution. Their absence is not guaranteed by the traditional notion of "data race" freedom. We present a new definition of data races in terms of 11 problematic interleaving scenarios, and prove that it is complete by showing that any execution not exhibiting these scenarios is serializable for a chosen set of locations. Our definition subsumes the traditional definition of a data race as well as high-level data races such as stale-value errors and inconsistent views. We also propose a language feature called atomic sets of locations, which lets programmers specify the existence of consistency properties between fields in objects, without specifying the properties themselves. We use static analysis to automatically infer those points in the code where synchronization is needed to avoid data races under our new definition. An important benefit of this approach is that, in general, far fewer annotations are required than is the case with existing approaches such as synchronized blocks or atomic sections. Our implementation successfully inferred the appropriate synchronization for a significant subset of Java's Standard Collections framework.

251 citations
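The atomic-sets idea — declaring that a group of fields shares an unspecified consistency property and letting analysis place the synchronization — can be illustrated with a minimal sketch (class and method names are ours, not the paper's): two fields forming one atomic set are only safe when every access to the pair is serialized as a unit.

```java
// Illustrative sketch (not the paper's implementation): two fields that form
// an "atomic set" -- their consistency property (lo <= hi) only holds if all
// reads and writes of the pair are serialized as a unit.
public class Range {
    private int lo = 0, hi = 0;

    // Updating both fields under one lock prevents the high-level data races
    // (e.g., stale-value errors) the paper describes, even though each single
    // field access on its own would already be free of low-level races.
    public synchronized void shift(int delta) {
        lo += delta;
        hi += delta;
    }

    public synchronized boolean consistent() {
        return lo <= hi;
    }

    public static boolean demo() {
        Range r = new Range();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) r.shift(1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) r.shift(-1); });
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { return false; }
        return r.consistent();
    }
}
```

The paper's point is that the programmer only declares that `lo` and `hi` belong together; the placement of the lock is inferred, requiring far fewer annotations than writing each `synchronized` block by hand.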


Journal ArticleDOI
TL;DR: An XML-based interchange format for event-driven process chains (EPC) that is called EPC markup language (EPML), which builds on EPC syntax related work and is tailored to be a serialization format for EPC modelling tools.
Abstract: This article presents an XML-based interchange format for event-driven process chains (EPC) that is called EPC markup language (EPML). EPML builds on EPC syntax related work and is tailored to be a serialization format for EPC modelling tools. Design principles inspired by other standardization efforts and XML design guidelines have governed the specification of EPML. After giving an overview of EPML concepts we present examples to illustrate its features including flat and hierarchical EPCs, business views, graphical information, and syntactical correctness.

151 citations


Proceedings ArticleDOI
03 Apr 2006
TL;DR: Immortal DB, as presented in this paper, builds transaction-time database support into a database engine and provides access to prior states of a database: an update inserts a new record while preserving the old version, and snapshot isolation concurrency control is supported.
Abstract: Transaction-time databases retain and provide access to prior states of a database. An update "inserts" a new record while preserving the old version. Immortal DB builds transaction-time database support into a database engine, not in middleware. It supports "as of" queries returning records current at the specified time. It also supports snapshot isolation concurrency control. Versions are stamped with the "clock times" of their updating transactions. The timestamp order agrees with transaction serialization order. Lazy timestamping propagates timestamps to transaction updates after commit. Versions are kept in an integrated storage structure, with historical versions initially stored with current data. Time-splits of pages permit large histories to be maintained, and enable time based indexing, which is essential for high performance historical queries. Experiments show that Immortal DB introduces little overhead for accessing recent database states while providing access to past states.

75 citations
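A minimal sketch of the transaction-time idea (the naming is ours, not Immortal DB's integrated storage structure): each version is stamped with its transaction's timestamp, and an "as of" query returns the newest version stamped at or before the requested time.

```java
import java.util.*;

// Sketch of a transaction-time table: an update never destroys the old
// version; it appends a new one stamped with the updating transaction's
// timestamp. Timestamp order stands in for serialization order.
public class TransactionTimeTable {
    private final Map<String, TreeMap<Long, String>> versions = new HashMap<>();

    public void put(String key, String value, long txTimestamp) {
        versions.computeIfAbsent(key, k -> new TreeMap<>()).put(txTimestamp, value);
    }

    // "AS OF t" lookup: the newest version stamped at or before t.
    // TreeMap.floorEntry plays the role of the time-based index.
    public String getAsOf(String key, long t) {
        TreeMap<Long, String> history = versions.get(key);
        if (history == null) return null;
        Map.Entry<Long, String> e = history.floorEntry(t);
        return e == null ? null : e.getValue();
    }
}
```

Accessing the current state is just `getAsOf(key, Long.MAX_VALUE)`, which is why keeping historical versions alongside current data adds little overhead for recent reads.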


Journal ArticleDOI
01 May 2006
TL;DR: This work presents a scalable quantum architecture design that employs specialization of the system into memory and computational regions, each individually optimized to match hardware support to the available parallelism.
Abstract: The assumption of maximum parallelism support for the successful realization of scalable quantum computers has led to homogeneous, "sea-of-qubits" architectures. The resulting architectures overcome the primary challenges of reliability and scalability at the cost of physically unacceptable system area. We find that by exploiting the natural serialization at both the application and the physical microarchitecture level of a quantum computer, we can reduce the area requirement while improving performance. In particular we present a scalable quantum architecture design that employs specialization of the system into memory and computational regions, each individually optimized to match hardware support to the available parallelism. Through careful application and system analysis, we find that our new architecture can yield up to a factor of thirteen savings in area due to specialization. In addition, by providing a memory hierarchy design for quantum computers, we can increase time performance by a factor of eight. This result brings us closer to the realization of a quantum processor that can solve meaningful problems.

56 citations


Patent
10 Jan 2006
TL;DR: In this paper, the authors propose a method of serializing and deserializing unknown data types in a strongly typed model, which includes serializing an object to a data stream at a first node and communicating the data stream to a second node.
Abstract: A method of serializing and deserializing unknown data types in a strongly typed model. The method includes serializing an object to a data stream at a first node and communicating the data stream to a second node. The second node may be another process, machine or a file on a disk. The data stream is deserialized at a later time, and the data types within the data stream are determined. Objects are instantiated in accordance with known data types, and unknown objects are created to retain information related to each unknown data type in the data stream. These unknown objects are used to regenerate the unknown data type when a serialization operation is performed at the second node on an unknown object.

42 citations
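The round-trip idea in this patent can be sketched in a few lines (the record format and class names are ours, purely illustrative): when the deserializer meets a type tag it does not know, it keeps the raw payload in a placeholder object so that a later serialization regenerates the original record instead of dropping it.

```java
// Sketch of tolerant deserialization: unknown types are retained, not lost.
public class TolerantCodec {
    // Placeholder that keeps everything needed to re-emit the record later.
    public static final class UnknownRecord {
        final String typeTag, payload;
        UnknownRecord(String typeTag, String payload) {
            this.typeTag = typeTag;
            this.payload = payload;
        }
    }

    // Records have the form "typeTag:payload". Known tags become real
    // objects; anything else becomes an UnknownRecord.
    public static Object decode(String record) {
        int i = record.indexOf(':');
        String tag = record.substring(0, i), payload = record.substring(i + 1);
        if (tag.equals("int")) return Integer.parseInt(payload);
        if (tag.equals("string")) return payload;
        return new UnknownRecord(tag, payload); // unknown type: retain raw data
    }

    public static String encode(Object o) {
        if (o instanceof Integer) return "int:" + o;
        if (o instanceof UnknownRecord) {
            UnknownRecord u = (UnknownRecord) o;
            return u.typeTag + ":" + u.payload; // regenerate the original record
        }
        return "string:" + o;
    }
}
```

Decoding and re-encoding an unrecognized record reproduces it byte for byte, which is exactly the property the second node needs when it re-serializes an object it never understood.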


01 Jan 2006
TL;DR: The BPEL Repository is an Eclipse plug-in originally built for BPEL business processes and other related XML data that provides a framework for storing, finding and using these documents and can be extended with new types of XML documents.
Abstract: We have published a repository for storing business processes and associated metadata. The BPEL Repository is an Eclipse plug-in originally built for BPEL business processes and other related XML data. It provides a framework for storing, finding and using these documents. Other research prototypes can reuse these features and build on top of it. The repository can easily be extended with new types of XML documents. It provides a Java API for manipulating the XML files as Java objects, hiding the serialization and de-serialization from the user. This has the advantage that the user can manipulate the data as more convenient Java objects, although the data is stored as XML files compliant with the standard XML schemas. The data can be queried as Java objects using an object-oriented query language, namely the Object Constraint Language (OCL). Moreover, the flexible design allows the OCL query engine to be replaced with an engine based on a different query language.

41 citations


Posted Content
TL;DR: In this paper, the authors present a scalable quantum architecture design that employs specialization of the system into memory and computational regions, each individually optimized to match hardware support to the available parallelism.
Abstract: The assumption of maximum parallelism support for the successful realization of scalable quantum computers has led to homogeneous, ``sea-of-qubits'' architectures. The resulting architectures overcome the primary challenges of reliability and scalability at the cost of physically unacceptable system area. We find that by exploiting the natural serialization at both the application and the physical microarchitecture level of a quantum computer, we can reduce the area requirement while improving performance. In particular we present a scalable quantum architecture design that employs specialization of the system into memory and computational regions, each individually optimized to match hardware support to the available parallelism. Through careful application and system analysis, we find that our new architecture can yield up to a factor of thirteen savings in area due to specialization. In addition, by providing a memory hierarchy design for quantum computers, we can increase time performance by a factor of eight. This result brings us closer to the realization of a quantum processor that can solve meaningful problems.

37 citations


Patent
Ernest S. Bender1
08 Sep 2006
TL;DR: In this article, a method, system, and computer program product for implementing inter-process integrity serialization services is provided, which includes enabling process states, including a must-stay-controlled (MSC) state and an extended must-stay-controlled (EMSC) state, for an invoking process when it is determined that only programs designated as controlled, if any, have been loaded for the invoking process.
Abstract: A method, system, and computer program product for implementing inter-process integrity serialization services is provided. The method includes enabling process states including a must-stay-controlled (MSC) state and an extended must-stay-controlled (EMSC) state for an invoking process when it is determined that only programs designated as controlled, if any, have been loaded for the invoking process. The invoking process requests loading of a target program into temporary storage for performing a security service. Based upon a control indicator of the target program, the MSC state, and the EMSC state, the method includes controlling one or more activities within the temporary storage. The activities include loading the target program into the temporary storage, executing a main program in the temporary storage, and resetting the MSC state and the EMSC state across execution of the main program during the lifetime of the invoking process.

32 citations


Patent
11 Aug 2006
TL;DR: In this paper, the authors present a method and system for monitoring and diagnosing the performance of remote method invocations using bytecode instrumentation in distributed multi-tier applications, which involves automated instrumentation of client application bytecode and server application byte code with sensors for measuring performance.
Abstract: Provided is a method and system for monitoring and diagnosing the performance of remote method invocations using bytecode instrumentation in distributed multi-tier applications. The provided method and system involves automated instrumentation of client application bytecode and server application bytecode with sensors for measuring performance of remote method invocations and operations performed during remote method invocations. Performance information is captured for each remote method invocation separately, allowing performance diagnosis of multithreaded execution of remote method invocations, so that throughput and response time information are accurate even when other threads perform remote method invocations concurrently. This makes the present invention suitable for performance diagnosis of remote method invocations on systems under load, such as found in production and load-testing environments. The captured performance metrics include throughput and response time information of remote method invocation, object serialization, and transport. The performance metrics are captured per remote method invocation. The above performance metrics enable developers and administrators to optimize their programming code for performance. Performance metrics are preferably further sent to a processing unit for storage, analysis, and correlation.

31 citations


Patent
07 Apr 2006
TL;DR: In this article, a database of a workflow processing system is migrated from a current version to a new version by serializing the data into serialized objects and then deserializing the objects into the new version of the database.
Abstract: Methods, systems, and apparatus for migrating a database of a workflow processing system from a current version to a new version by serializing the data into serialized objects and then deserializing the objects into the new version of the database. The current version of the database may include elements of data associated with base features and extension features of the workflow processing system. The new version of the database is initially generated to include only base features associated with a new version of the programmed instructions of the system. Deserializing the serialized objects into the new version of the database is effective to merge the object types of the information in the current version of the database into the object type of the new version of the database.

27 citations


Book ChapterDOI
08 May 2006
TL;DR: This paper proposes an OCC/DTA (Optimistic Concurrency Control with Dynamic Timestamp Adjustment) protocol that can be efficiently adapted to mobile computing environments and reduces communication overhead by using client-side validation procedure and enhances transaction throughput by adjusting serialization order without violating transaction semantics.
Abstract: Data broadcasting is an efficient method for disseminating data, and is widely accepted in the database applications of mobile computing environments because of its asymmetric communication bandwidth between a server and mobile clients. This requires new types of concurrency control mechanism to support mobile transactions executed in the mobile clients, which have low-bandwidths toward the server. In this paper, we propose an OCC/DTA (Optimistic Concurrency Control with Dynamic Timestamp Adjustment) protocol that can be efficiently adapted to mobile computing environments. The protocol reduces communication overhead by using a client-side validation procedure and enhances transaction throughput by adjusting serialization order without violating transaction semantics. We show that the proposed protocol satisfies data consistency requirements, and show through simulation that this protocol can improve the performance of mobile transactions in data broadcasting environments.
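One way to picture dynamic timestamp adjustment (a simplification of such protocols, with our own naming, not the OCC/DTA specification): instead of validating a transaction at one fixed timestamp and aborting on any conflict, the validator searches for any serialization point at which every version the transaction read was current, and commits there if one exists.

```java
import java.util.*;

// Sketch: a transaction commits iff some serialization point t exists such
// that each version it read was written at or before t and not yet
// overwritten at t, i.e. the interval [low, high) is non-empty.
public class DtaValidator {
    // For each item read: writeTs = when the version read was written,
    // nextWriteTs = when it was overwritten (Long.MAX_VALUE if still current).
    public static final class ReadEntry {
        final long writeTs, nextWriteTs;
        public ReadEntry(long writeTs, long nextWriteTs) {
            this.writeTs = writeTs;
            this.nextWriteTs = nextWriteTs;
        }
    }

    // Returns the adjusted serialization timestamp, or -1 to abort.
    public static long validate(List<ReadEntry> reads) {
        long low = 0, high = Long.MAX_VALUE;
        for (ReadEntry r : reads) {
            low = Math.max(low, r.writeTs);       // after the version appeared
            high = Math.min(high, r.nextWriteTs); // before it was overwritten
        }
        return low < high ? low : -1;
    }
}
```

Because the check needs only the timestamps the client already saw on the broadcast, validation can run client-side, which is where the protocol's communication savings come from.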

Patent
Rahul Kapoor1, Rolando Jimenez Salgado1, Kaushik Raj1, Satish R. Thatte1, Xiaoyu Wu1 
20 Nov 2006
TL;DR: In this paper, the authors propose a versioning and concurrency control architecture for data operations on data of a data source by multiple independent clients of a user, where data operation messages between the clients and the data source are intercepted and tracked for serialization control to a data view instance of the source.
Abstract: Versioning and concurrency control architecture of data operations on data of a data source by multiple independent clients of a user. Data operation messages between the clients and the data source are intercepted and tracked for serialization control to a data view instance of the data source. The architecture can be located as an always-on, centrally located system (e.g., mid-tier); it accommodates data operations that include create, read, update, delete, and query (CRUDQ) against data sources, and provides support for distributed transactions, locking, versioning, and reliable messaging, for example, for data sources that do not expose such capabilities. A hash is employed for version control and to control changes at the data source. The central system also provides logic for the individual CRUDQ operations, and granular error classification to enable retries whenever possible.

Patent
30 Mar 2006
TL;DR: In this article, it is detected if a component included in a graph of components associated with a user session on a first system has not changed since a prior serialization to a second system.
Abstract: Serialization is disclosed. It is detected if a component included in a graph of components associated with a user session on a first system has not changed since a prior serialization to a second system. A token is sent to the second system during a current serialization, instead of the component, indicating the component has not changed since the prior serialization. De-serialization is disclosed. A token is received at a first system from a second system, in a stream of serialized data from the second system, that indicates that a component on the second system has not changed since a prior serialization. A cached version of the component is retrieved. The cached copy is used to reconstruct on the second system a state of a user session with which the component is associated on the second system.
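The token scheme amounts to memoized serialization, which can be sketched as follows (record format and names are ours, purely illustrative): the sender hashes each component's content, sends a short token when the hash matches what was sent before, and the receiver resolves tokens against its cache of previously received components.

```java
import java.util.*;

// Sketch: unchanged components are replaced by tokens in the serialized stream.
public class TokenSerializer {
    private final Map<String, String> lastSent = new HashMap<>(); // id -> hash

    public String serialize(String id, String content) {
        String hash = Integer.toHexString(content.hashCode());
        if (hash.equals(lastSent.get(id))) {
            return "TOKEN:" + id;          // unchanged since prior serialization
        }
        lastSent.put(id, hash);
        return "DATA:" + id + ":" + content;
    }

    // Receiver side: reconstruct the component, falling back to the cache
    // when only a token arrived.
    public static String deserialize(String record, Map<String, String> cache) {
        if (record.startsWith("TOKEN:")) {
            return cache.get(record.substring(6));
        }
        String[] parts = record.split(":", 3); // "DATA", id, content
        cache.put(parts[1], parts[2]);
        return parts[2];
    }
}
```

The second serialization of an unchanged component costs a few bytes instead of the full component, which is the point of the patent's token.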

Patent
12 Jan 2006
TL;DR: In this paper, Dynamically generated code of an exception detecting and recreating (EDR) utility is inserted into the application programming interface (API) entry points to the server to store method call parameter states by either cloning the objects or implementing Java serialization/de-serialization.
Abstract: A method for autonomically detecting and recreating exceptions occurring in a runtime environment during software development/testing. Dynamically-generated code of an exception detecting and recreating (EDR) utility is inserted into the application programming interface (API) entry points to the server to store method call parameter states by either cloning the objects or implementing Java serialization/de-serialization. The runtime listens for exceptions to be thrown and generates a Java file that allows the API to be later invoked with the stored parameters for the specific interaction that generated/caused the exception. When the application is stopped, the Java files generated are packaged into an application that will run on the server and allow re-execution of the problem paths.

Patent
29 Jul 2006
TL;DR: In this paper, an autonomous and portable smartcard reader device incorporates a high level of embedded security countermeasures, including a light sensor/DTMF/infrared and PIN or other keyboard entry, and at the output through the use of a dual-tone encoder-decoder.
Abstract: An autonomous and portable smartcard reader device incorporates a high level of embedded security countermeasures. Data transfers are encrypted with specific input devices, namely a light sensor/DTMF/infrared and PIN or other keyboard entry, and at the output through the use of a dual-tone encoder-decoder. The unit may be used alone or as a plug-in to another device such as a PDA, cell phone, or remote control. The reader may further be coupled to various biometric or plug-in devices to achieve at least five levels of authentication, namely: (1) the smartcard itself; (2) the smartcard reader; (3) the PIN; (4) public-key infrastructure (PKI); and (5) the (optional) biometric device. These five levels provide extremely strong authentication applicable to public networking on public/private computers, and even to TV (satellite, cable), DVD, CD audio, and software applications. Transactions including payments may be carried out without any risk of communication tampering, authentication misconduct, or identity theft. In essence, the device is a closed box with communication ports. Emulation of the device is therefore extremely complex, because it involves PKI, hardware serialization for communication, and a software implementation, in conjunction with a specific hardware embodiment and a service usage infrastructure component that returns a response necessary for each unique transaction, linked to an atomic time synchronization.

Proceedings ArticleDOI
05 Jul 2006
TL;DR: This work investigates information flow in the presence of non-opaque pointers for an imperative language with records, pointer instructions and exceptions, and develops an information flow aware type system which guarantees noninterference.
Abstract: A common theoretical assumption in the study of information flow security in Java-like languages is that pointers are opaque - i.e., that the only properties that can be observed of pointers are the objects to which they point, and (at most) their equality. These assumptions often fail in practice. For example, various important operations in Java's standard API, such as hashcodes or serialization, might break pointer opacity. As a result, information-flow static analyses which assume pointer opacity risk being unsound in practice, since the pointer representation provides an unchecked implicit leak. We investigate information flow in the presence of non-opaque pointers for an imperative language with records, pointer instructions and exceptions, and develop an information flow aware type system which guarantees noninterference.

Patent
17 Jan 2006
TL;DR: In this paper, a system and method for serialization and/or de-serialization of file system item(s) and associated entity(ies) is provided, which includes an identification component that identifies entities associated with an item and a serialization component that serializes the item and associated entities.
Abstract: A system and method for serialization and/or de-serialization of file system item(s) and associated entity(ies) is provided. A file system “item” comprises a core class which can include property(ies). An item can be simple or compound (e.g., includes other item(s) embedded in it). Associated with an item can be entity(ies) such as fragment(s), link(s) with other item(s) and/or extension(s). Through serialization, a consistent copy of the item and associated entity(ies), if any, can be captured (e.g., for transporting of the item and to reconstruct the item on a destination system). The serialization system includes an identification component that identifies entity(ies) associated with an item and a serialization component that serializes the item and associated entity(ies). The serialization component can further serialize a header that includes information associated with the item and associated entity(ies). The header can facilitate random access to the item and associated entity(ies) (e.g., allowing a reader to interpret/parse only the parts in which it is interested). The serialization system can expose application program interface(s) (API's) that facilitate the copying, moving and/or transfer of an item and its associated entity(ies) from one location to another location.
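The random-access header can be sketched concretely (the wire format here is ours, purely illustrative): the stream opens with a table of part names and byte offsets, so a reader can jump straight to the one part it cares about without parsing the rest.

```java
import java.util.*;

// Sketch: serialized form is "name=offset;name=offset|body", where each
// offset is the part's starting position within the body.
public class ItemWriter {
    public static String write(LinkedHashMap<String, String> parts) {
        StringBuilder body = new StringBuilder();
        StringBuilder header = new StringBuilder();
        for (Map.Entry<String, String> e : parts.entrySet()) {
            if (header.length() > 0) header.append(';');
            header.append(e.getKey()).append('=').append(body.length());
            body.append(e.getValue());
        }
        return header + "|" + body;
    }

    // Random access: read one named part directly via its header offset.
    public static String read(String stream, String name) {
        int bar = stream.indexOf('|');
        String[] entries = stream.substring(0, bar).split(";");
        for (int i = 0; i < entries.length; i++) {
            String[] kv = entries[i].split("=");
            if (kv[0].equals(name)) {
                int start = bar + 1 + Integer.parseInt(kv[1]);
                int end = (i + 1 < entries.length)
                        ? bar + 1 + Integer.parseInt(entries[i + 1].split("=")[1])
                        : stream.length();
                return stream.substring(start, end);
            }
        }
        return null;
    }
}
```

A reader interested only in an item's links, say, scans the small header and extracts that one part, which is the access pattern the patent's header is meant to enable.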

Patent
30 Jun 2006
TL;DR: In this paper, the authors present a system for processing a packet, including a network interface card (NIC), including a plurality of hardware receive rings, a classifier configured to classify the packet and send the packet to one of the plurality of hardware receive rings, and a host, operatively connected to the NIC.
Abstract: A system for processing a packet, including a network interface card (NIC), including a plurality of hardware receive rings, a classifier configured to classify the packet and send the packet to one of the plurality of hardware receive rings, and a host, operatively connected to the NIC, including a virtual network stack including a virtual serialization queue, a virtual network interface card (VNIC) associated with the virtual serialization queue, a device driver associated with the VNIC and configured to store a function pointer and a token associated with one of the plurality of hardware receive rings, where the VNIC is configured to perform at least one selected from a group consisting of enabling bypass mode and disabling bypass mode by changing the function pointer stored in the device driver, where the function pointer is used to send the packet to the virtual serialization queue if the bypass mode is enabled.

Book ChapterDOI
26 Mar 2006
TL;DR: It is argued that concurrent processing of queries (reads) and window-slides (writes) is required by data stream systems in order to allow prioritized query scheduling and improve the freshness of answers.
Abstract: Data stream systems execute a dynamic workload of long-running and one-time queries, with the streaming inputs typically bounded by sliding windows. For efficiency, windows may be advanced periodically by replacing the oldest part of the window with a batch of new data. Existing work on stream processing assumes that a window cannot be advanced while it is being accessed by a query. In this paper, we argue that concurrent processing of queries (reads) and window-slides (writes) is required by data stream systems in order to allow prioritized query scheduling and improve the freshness of answers. We prove that the traditional notion of conflict serializability is insufficient in this context and define stronger isolation levels that restrict the allowed serialization orders. We also design and experimentally evaluate a transaction scheduler that efficiently enforces the new isolation levels.

Patent
08 Dec 2006
TL;DR: In this article, an agent component is employed by the agent component to process kernel mode requests from a user mode application when communicating with a storage platform, and re-try components can be provided to facilitate cooperation between the SIAC and the SBAC.
Abstract: An operating system is provided. The system includes an agent component to monitor computer activities between one or more single-item access components (SIAC) and one or more set-based access components (SBAC). An interface component is employed by the agent component to process kernel mode requests from a user mode application when communicating with a storage platform. Re-try components can be provided to facilitate cooperation between the SIAC and the SBAC.

Patent
03 Apr 2006
TL;DR: In this paper, the authors present a method, system, and program product for managing adapter association for a data graph of data objects, which avoids the "overhead" involved with associating and having active adapters during deserialization.
Abstract: The present invention provides a method, system, and program product for managing adapter association for a data graph of data objects. Specifically, under the present invention, a data graph of data objects is generated (e.g., on a server), and then serialized. In performing the serialization, the data graph is translated into bits. In one embodiment, the bits are communicated to a client over a network, and then translated back into the data graph (i.e., deserialized). An adapter is associated with each of the data objects after the data graph is deserialized. This avoids the “overhead” involved with associating and having active adapters during deserialization.

Proceedings ArticleDOI
01 Aug 2006
TL;DR: This work describes an architecture for adaptive Web applications specially suited for mobile devices based on an ongoing standardization effort within World Wide Web Consortium for client side device context access and provides details of a proof-of-concept implementation for DCI.
Abstract: The usage of context data to facilitate adaptive applications is gaining widespread prominence. We describe an architecture for adaptive web applications specially suited for mobile devices. This approach is based on an ongoing standardization effort within World Wide Web Consortium (W3C) for client side device context access. The specification, Delivery Context: Interfaces (DCI) is intended to be used as an access mechanism for context consumers. We augment this approach by providing additional interfaces for context providers towards data provisioning to DCI and usage of a dynamic profile generation mechanism that relies on DCI for data serialization. This complements current approaches such as utilizing User Agent Profiles used in server side adaptation. We also provide details of a proof-of-concept implementation for DCI along with two adaptive applications and outline our future plans for this work.

Proceedings ArticleDOI
09 Dec 2006
TL;DR: To reconcile the seemingly conflicting goals of resource amplification and serialization avoidance, this paper develops three schemes that identify and reject mini-graphs with harmful serialization, including slack-profile, which virtually eliminates serialization-induced slowdowns while providing 34% amplification rates.
Abstract: Instruction aggregation-the grouping of multiple operations into a single processing unit -is a technique that has recently been used to amplify the bandwidth and capacity of critical processor structures. This amplification can be used to improve IPC or to maintain IPC while reducing physical resources. Mini-graph processing is a particular instruction aggregation technique that targets dynamically-scheduled superscalar processors and achieves bandwidth and capacity amplification throughout the pipeline. The dark side of aggregation is serialization. External serialization is an effect common to many aggregation schemes. An aggregate cannot issue until all of its external inputs are ready. If the last-arriving input to an aggregate feeds what is not the first instruction, the entire aggregate can be delayed. Mini-graphs additionally suffer from internal serialization. Serialization can degrade performance, sometimes to the point of overwhelming the benefits of aggregation. This paper examines the problem of serialization and serialization-aware aggregation in the context of mini-graphs. An aggressive mini-graph selection scheme that seeks to maximize amplification, produces amplification rates of 38% but, due to serialization, cannot use them to compensate for a 33% reduction in physical resources (i.e., a reduction from 4-way issue to 3-way issue). A conservative selection scheme that avoids serialization by static inspection produces amplification rates of only 20%, making a performance neutral reduction in resources virtually impossible. To reconcile the seemingly conflicting goals of resource amplification and serialization avoidance, this paper develops three schemes that identify and reject mini-graphs with harmful serialization. The most effective of these, Slack-Profile, uses local slack profiles to reject mini-graphs whose estimated delay cannot be absorbed by the rest of the program. 
Slack-Profile virtually eliminates serialization-induced slowdowns while providing 34% amplification rates. A 3-way issue processor augmented with Slack-Profile mini-graphs outperforms a 4-way issue processor by an average of 2%.

Patent
Behl Stefan1, Carsten Leue1, Falk Posch1
18 Oct 2006
TL;DR: In this article, a complete stream-based serialization divided into two sub-processes which are both streambased is presented. But, the stream based serialization is not parallelized.
Abstract: The present invention provides a method, system, and computer program product for efficiently serializing navigational state into URLs or the header of the new portal page by using a complete stream-based serialization divided into two sub-processes which are both stream-based. The first stream-based serialization sub-process, which is hierarchy-oriented, uses the hierarchical object representation of the navigational state and transforms it into a series of events. At the end of the sub-process, the compacted navigational state information carried by the received events is transformed into a character-based representation, and the hierarchical structure of the navigational state is derived from the order of the received events and transformed into an additional character-based representation, both being directly streamed to the second sub-process. The second stream-based serialization sub-process, which is hierarchy-independent, uses the result of the first sub-process, applies further compression and character encoding strategies, and finally streams the compressed and character-encoded information into a URL or header of the new portal page. Both sub-processes are seamlessly linked together.
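A much-reduced sketch of the two-stage pipeline (class names and the event encoding are ours, not the patent's): stage one walks the hierarchical state and emits start/end events whose order encodes the structure; stage two treats the resulting character stream as opaque and encodes it for inclusion in a URL.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Sketch of stream-based serialization of navigational state into a URL.
public class StateToUrl {
    // Hypothetical navigational-state tree node.
    public static final class Page {
        final String name;
        final List<Page> children;
        public Page(String name, List<Page> children) {
            this.name = name;
            this.children = children;
        }
    }

    // Stage 1 (hierarchy-oriented): emit events; the hierarchy is fully
    // recoverable from the order of the '(' and ')' events alone.
    public static String toEvents(Page p) {
        StringBuilder out = new StringBuilder();
        emit(p, out);
        return out.toString();
    }
    private static void emit(Page p, StringBuilder out) {
        out.append('(').append(p.name);
        for (Page c : p.children) emit(c, out);
        out.append(')');
    }

    // Stage 2 (hierarchy-independent): encode the character stream so it is
    // safe to carry in a URL. A real implementation would also compress here.
    public static String toUrlParam(Page p) {
        return URLEncoder.encode(toEvents(p), StandardCharsets.UTF_8);
    }
}
```

Because each stage consumes its input as a stream, the second stage never needs the whole object graph in memory, which is the motivation for splitting the process this way.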

Journal ArticleDOI
TL;DR: The work described in this paper solves one key problem for static analysis of RMI programs and provides a starting point for future work on improving the understanding, testing, verification, and performance of R MI-based software.
Abstract: Distributed applications provide numerous advantages related to software performance, reliability, interoperability, and extensibility. This paper focuses on distributed Java programs built with the help of the remote method invocation (RMI) mechanism. We consider points-to analysis for such applications. Points-to analysis determines the objects pointed to by a reference variable or a reference object field. Such information plays a fundamental role as a prerequisite for many other static analyses. We present the first theoretical definition of points-to analysis for RMI-based Java applications, and we present an algorithm for implementing a flow- and context-insensitive points-to analysis for such applications. We also discuss the use of points-to information for computing call graph information, for understanding data dependencies due to remote memory locations, and for identifying opportunities for improving the performance of object serialization at remote calls. The work described in this paper solves one key problem for static analysis of RMI programs and provides a starting point for future work on improving the understanding, testing, verification, and performance of RMI-based software.
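The single-process core of a flow- and context-insensitive points-to analysis can be sketched in a few lines (an Andersen-style fixed-point over a toy IR, not the paper's RMI-aware algorithm, which extends this kind of analysis across remote call boundaries):

```python
# stmts is a toy IR: ("new", x, O) means x = new O;
# ("assign", x, y) means x = y. The analysis ignores statement order
# (flow-insensitive) and calling context (context-insensitive).
def points_to(stmts):
    pts = {}          # variable -> set of abstract objects
    changed = True
    while changed:    # iterate to a fixed point
        changed = False
        for kind, lhs, rhs in stmts:
            if kind == "new":
                new = {rhs}
            else:                       # "assign"
                new = pts.get(rhs, set())
            cur = pts.setdefault(lhs, set())
            if not new <= cur:
                cur |= new
                changed = True
    return pts

stmts = [("new", "a", "o1"), ("assign", "b", "a"), ("new", "b", "o2")]
print(points_to(stmts))  # b may point to both o1 and o2
```

The serialization connection: at a remote call, any object reachable from the points-to sets of the arguments must be serialized, so smaller, more precise sets directly expose optimization opportunities.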

Proceedings ArticleDOI
05 Oct 2006
TL;DR: This paper introduces operation commutativity as a key principle in designing operations in order to keep distributed replicas consistent, and suggests effective schemes that make operations commutative using the relations of objects and operations.
Abstract: As collaboration over the Internet becomes an everyday affair, it is increasingly important to provide high quality of interactivity. Distributed applications can replicate collaborative objects at every site for the purpose of achieving high interactivity. Replication, however, has a fatal weakness: it is difficult to maintain consistency among replicas. This paper introduces operation commutativity as a key principle in designing operations in order to keep distributed replicas consistent. In addition, we suggest effective schemes that make operations commutative using the relations of objects and operations. Finally, we apply our approaches to some simple replicated abstract data types, and achieve their consistency without serialization and locking.
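The principle can be shown with the simplest possible replicated abstract data type (an illustration of the idea, not the paper's actual schemes): if every operation commutes with every other, replicas that apply the same set of operations in any order converge, with no serialization or locking.

```python
import random

class ReplicatedCounter:
    """A replicated counter whose operations (signed increments)
    are commutative and associative, so delivery order is irrelevant."""
    def __init__(self):
        self.value = 0

    def apply(self, op):
        self.value += op

ops = [+1, +4, -2, +7]            # operations generated at different sites
a, b = ReplicatedCounter(), ReplicatedCounter()
for op in ops:                    # replica a: original order
    a.apply(op)
for op in random.sample(ops, len(ops)):   # replica b: arbitrary order
    b.apply(op)
assert a.value == b.value == 10   # replicas converge regardless of order
```

Non-commutative operations (e.g., "set value to x") are where the paper's schemes come in: they rework such operations, using relations among objects and operations, so that concurrent executions commute.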

Patent
Cozianu Costin1
01 Jun 2006
TL;DR: In this paper, a serialization library is used to store sensitive or confidential information in XML-based systems, where fields into which such information is received are tagged with a type identifier representative of the sensitive or confidential nature of the associated field content.
Abstract: Sensitive or confidential information is received into a serialization library, where it is associated with one or more fields. Fields into which such information is received are tagged with a type identifier representative of the sensitive or confidential nature of associated field content. Information thus tagged is then automatically encrypted with either a process-associated key or a session-associated key. Encrypted messages are then communicated to an associated web service or message service. Such encryption is particularly useful in automatically encrypting confidential information in XML-based systems.

Patent
Dan F. Greiner1, Donald W. Schmidt1
03 May 2006
TL;DR: In this paper, a compare, swap and store facility is provided that does not require external serialization, and a compare and swap operation is performed using an interlocked update operation.
Abstract: A compare, swap and store facility is provided that does not require external serialization. A compare and swap operation is performed using an interlocked update operation. If the comparison indicates equality, a store operation is performed. The compare, swap and store operations are performed as a single unit of operation.
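The semantics can be sketched as a software simulation (the patent describes a hardware facility that needs no external serialization; here a lock merely stands in for the hardware interlock, and all names are invented):

```python
import threading

class CSST:
    """Compare, swap, and store as a single unit of operation."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()    # stand-in for the hardware interlock

    def compare_swap_store(self, expected, new, store_cell, store_value):
        """If value == expected, swap in `new` AND perform the dependent
        store, all under one interlock. Returns whether the compare held."""
        with self._lock:
            if self.value != expected:
                return False
            self.value = new
            store_cell[0] = store_value  # the store half of the facility
            return True

cell = [None]
x = CSST(5)
assert x.compare_swap_store(5, 6, cell, "done")       # compare succeeds
assert x.value == 6 and cell[0] == "done"
assert not x.compare_swap_store(5, 7, cell, "late")   # compare now fails
```

The interesting property is that the store is conditional on the compare and indivisible from the swap, so no lock or serializing instruction is needed around the pair in the hardware version.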

Patent
14 Jul 2006
TL;DR: In this article, a method for managing serialization of ELECTRONIC PRODUCT CODES (EPCs) is presented, which can include a step of identifying a software system for managing Tag Data Specification (TDS) compliant EPCs.
Abstract: The present invention includes a method for managing serialization of ELECTRONIC PRODUCT CODES (EPCs). The method can include a step of identifying a software system for managing Tag Data Specification (TDS) compliant EPCs. The software system can include a database containing two or more related tables. A tuple can be included for each unique nonserialized portion of an EPC ID URN. The database can utilize the nonserialized portion to manage a serialized portion of the EPC. In one embodiment, the database can use the nonserialized portion of an EPC to automatically generate the serialized portion of the EPC. Different sets of sequentially increasing (or sequentially decreasing) serial numbers (that are assigned to the associated unique nonserialized portions of the EPCs) can be associated with different nonserialized values.
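The core bookkeeping can be sketched with a per-prefix counter standing in for the patent's database tuple (a hypothetical illustration; the real system keys related tables on the nonserialized portion of a TDS-compliant EPC ID URN):

```python
class EpcSerializer:
    """Generates the serialized portion of an EPC from its
    nonserialized portion, one counter per unique prefix."""
    def __init__(self):
        self._next = {}   # nonserialized prefix -> next serial number

    def allocate(self, prefix):
        serial = self._next.get(prefix, 1)
        self._next[prefix] = serial + 1      # sequentially increasing
        return f"{prefix}.{serial}"

epcs = EpcSerializer()
print(epcs.allocate("urn:epc:id:sgtin:0614141.112345"))  # ...112345.1
print(epcs.allocate("urn:epc:id:sgtin:0614141.112345"))  # ...112345.2
```

Keeping the counter keyed on the nonserialized portion is what lets distinct products (distinct prefixes) carry independent serial-number sequences.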

Journal ArticleDOI
TL;DR: Evaluation results indicate that the use of bus serialization can reduce bus power consumption by 30% in the 45nm technology process.
Abstract: On-chip interconnects are becoming a major power consumer in scaled VLSI design. Consequently, bus power reduction has become effective for total power reduction on chip multiprocessors and system-on-a-chip requiring long interconnects as buses. In this paper, we advocate the use of bus serialization to reduce bus power consumption. Bus serialization decreases the number of wires and increases the pitch between the wires. The wider pitch decreases the coupling capacitances of the wires, and consequently reduces bus power consumption. Evaluation results indicate that our technique can reduce bus power consumption by 30% in the 45nm technology process.
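A back-of-envelope model shows why the trade-off can pay off (all numbers below are illustrative, not taken from the paper): 2:1 serialization doubles the per-wire switching frequency, but halving the wire count and widening the pitch cuts the coupling capacitance per wire, which dominates in scaled processes.

```python
def bus_power(wires, freq, c_ground, c_coupling, vdd=1.0, activity=0.5):
    # Dynamic power ~ N * alpha * C * Vdd^2 * f, with C the effective
    # per-wire capacitance (ground plus coupling components).
    c_total = c_ground + c_coupling
    return wires * activity * c_total * vdd**2 * freq

# 32-wire parallel bus vs. 2:1 serialized 16-wire bus at doubled pitch.
# Wider pitch assumed to cut coupling capacitance from 0.3 fF/um-scale
# units to 0.18 (made-up values chosen only to illustrate the mechanism).
p_parallel   = bus_power(32, 1e9, c_ground=1e-13, c_coupling=3e-13)
p_serialized = bus_power(16, 2e9, c_ground=1e-13, c_coupling=1.8e-13)
print(f"savings: {1 - p_serialized / p_parallel:.0%}")
```

The total switched charge per bit is unchanged by serialization itself; the entire saving comes from the reduced coupling term, which is consistent with the paper's observation that the benefit grows as interconnect coupling dominates.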