
Showing papers on "Serialization published in 2012"


Proceedings ArticleDOI
Kazuaki Maeda
16 May 2012
TL;DR: This paper compares twelve object serialization libraries from qualitative and quantitative perspectives, showing that there is no single best solution: each library performs well in the context for which it was developed.
Abstract: This paper compares twelve object serialization libraries from qualitative and quantitative perspectives, covering object serialization in XML, JSON, and binary formats. Using each library, a common example is serialized to a file. The size of the serialized file and the processing time are measured during execution to compare all of the libraries. Some libraries show a performance penalty, but it is clear that there is no single best solution: each library performs well in the context for which it was developed.

127 citations
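The paper's measurement methodology (serialize a common example, then compare output size and processing time) can be sketched in a few lines. This illustrative version uses Python's built-in json and pickle modules rather than the twelve libraries the paper evaluates:

```python
import json
import pickle
import time

# A common example object, serialized by each library so that output
# size and processing time can be compared on equal terms.
record = {"id": 42, "name": "sensor-7", "readings": [1.5, 2.7, 3.1] * 100}

def measure(label, dumps):
    """Serialize `record` once, returning (size_in_bytes, seconds)."""
    start = time.perf_counter()
    data = dumps(record)
    elapsed = time.perf_counter() - start
    payload = data if isinstance(data, bytes) else data.encode("utf-8")
    print(f"{label}: {len(payload)} bytes, {elapsed * 1e6:.1f} us")
    return len(payload), elapsed

json_size, _ = measure("json (text)", json.dumps)
pickle_size, _ = measure("pickle (binary)", pickle.dumps)
```

As in the paper, neither format wins outright: a text format can be smaller for short strings, while a binary format avoids number-to-text conversion cost.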


Proceedings ArticleDOI
13 Aug 2012
TL;DR: McNettle is an extensible SDN control system whose control event processing throughput scales with the number of system CPU cores and which supports control algorithms requiring globally visible state changes occurring at flow arrival rates.
Abstract: Software defined networking (SDN) introduces centralized controllers to dramatically increase network programmability. The simplicity of a logical centralized controller, however, can come at the cost of control-plane scalability. In this demo, we present McNettle, an extensible SDN control system whose control event processing throughput scales with the number of system CPU cores and which supports control algorithms requiring globally visible state changes occurring at flow arrival rates. Programmers extend McNettle by writing event handlers and background programs in a high-level functional programming language extended with shared state and memory transactions. We implement our framework in Haskell and leverage the multicore facilities of the Glasgow Haskell Compiler (GHC) and runtime system. Our implementation schedules event handlers, allocates memory, optimizes message parsing and serialization, and reduces system calls in order to optimize cache usage, OS processing, and runtime system overhead. Our experiments show that McNettle can serve up to 5000 switches using a single controller with 46 cores, achieving throughput of over 14 million flows per second, near-linear scaling up to 46 cores, and latency under 200 μs for light loads and 10 ms with loads consisting of up to 5000 switches.

119 citations


Book
10 Oct 2012
TL;DR: Hadoop in Practice collects 85 Hadoop examples and presents them in a problem/solution format, each addressing a specific task you'll face, like querying big data using Pig or writing a log file loader.
Abstract: SummaryHadoop in Practice collects 85 Hadoop examples and presents them in a problem/solution format. Each technique addresses a specific task you'll face, like querying big data using Pig or writing a log file loader. You'll explore each problem step by step, learning both how to build and deploy that specific solution along with the thinking that went into its design. As you work through the tasks, you'll find yourself growing more comfortable with Hadoop and at home in the world of big data. About the TechnologyHadoop is an open source MapReduce platform designed to query and analyze data distributed across large clusters. Especially effective for big data systems, Hadoop powers mission-critical software at Apple, eBay, LinkedIn, Yahoo, and Facebook. It offers developers handy ways to store, manage, and analyze data. About the BookHadoop in Practice collects 85 battle-tested examples and presents them in a problem/solution format. It balances conceptual foundations with practical recipes for key problem areas like data ingress and egress, serialization, and LZO compression. You'll explore each technique step by step, learning how to build a specific solution along with the thinking that went into it. As a bonus, the book's examples create a well-structured and understandable codebase you can tweak to meet your own needs.This book assumes the reader knows the basics of Hadoop. Purchase of the print book comes with an offer of a free PDF, ePub, and Kindle eBook from Manning. Also available is all code from the book. 
What's Inside: Conceptual overview of Hadoop and MapReduce; 85 practical, tested techniques; Real problems, real solutions; How to integrate MapReduce and R. Table of Contents: PART 1 BACKGROUND AND FUNDAMENTALS: Hadoop in a heartbeat. PART 2 DATA LOGISTICS: Moving data in and out of Hadoop; Data serialization: working with text and beyond. PART 3 BIG DATA PATTERNS: Applying MapReduce patterns to big data; Streamlining HDFS for big data; Diagnosing and tuning performance problems. PART 4 DATA SCIENCE: Utilizing data structures and algorithms; Integrating R and Hadoop for statistics and more; Predictive analytics with Mahout. PART 5 TAMING THE ELEPHANT: Hacking with Hive; Programming pipelines with Pig; Crunch and other technologies; Testing and debugging.

115 citations


Proceedings ArticleDOI
20 Feb 2012
TL;DR: This paper compares four data serialization formats (XML, JSON, Thrift, and ProtoBuf) with an emphasis on serialization speed, data size, and usability.
Abstract: Because of the increase in easily obtainable internet-connected mobile devices and their unique characteristics, choosing the proper data serialization format has become increasingly difficult. These devices are resource scarce and bandwidth limited. In this paper, we compare four different data serialization formats with an emphasis on serialization speed, data size, and usability. The selected serialization formats include XML, JSON, Thrift, and ProtoBuf. XML and JSON are the most well known text-based data formats, while ProtoBuf and Thrift are relatively new binary serialization formats. These data serialization formats are tested on an Android device using both text-heavy and number-heavy objects.

92 citations
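The number-heavy case the authors test can be illustrated with Python's standard library, using struct as a stand-in for a binary wire format such as Thrift or ProtoBuf (neither is used here):

```python
import json
import struct

# A number-heavy object: 1000 doubles with long decimal expansions.
numbers = [i * 0.123456789 for i in range(1000)]

# A text format (JSON) spells every number out as characters...
text_encoded = json.dumps(numbers).encode("utf-8")

# ...while a binary format packs each double into exactly 8 bytes
# (struct stands in here for a Thrift/ProtoBuf-style binary encoding).
binary_encoded = struct.pack(f"{len(numbers)}d", *numbers)

print(len(text_encoded), len(binary_encoded))
```

For text-heavy objects the gap narrows, since strings occupy similar space in both kinds of format, which is why the paper tests both object types.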


Proceedings ArticleDOI
25 Jun 2012
TL;DR: Rootbeer is a project that allows developers to simply write code in Java; the (de)serialization, kernel code generation, and kernel launch are done automatically, in contrast to Java language bindings for CUDA or OpenCL, where the developer still has to do these things manually.
Abstract: When converting a serial program to a parallel program that can run on a Graphics Processing Unit (GPU) the developer must choose what functions will run on the GPU. For each function the developer chooses, he or she needs to manually write code to: 1) serialize state to GPU memory, 2) define the kernel code that the GPU will execute, 3) control the kernel launch and 4) deserialize state back to CPU memory. Rootbeer is a project that allows developers to simply write code in Java and the (de)serialization, kernel code generation and kernel launch is done automatically. This is in contrast to Java language bindings for CUDA or OpenCL where the developer still has to do these things manually. Rootbeer supports all features of the Java Programming Language except dynamic method invocation, reflection and native methods. The features that are supported are: 1) single and multi-dimensional arrays of primitive and reference types, 2) composite objects, 3) instance and static fields, 4) dynamic memory allocation, 5) inner classes, 6) synchronized methods and monitors, 7) strings and 8) exceptions that are thrown or caught on the GPU. Rootbeer is the most full-featured tool to enable GPU computing from within Java to date. Rootbeer is highly tested. We have 21k lines of product code and 6.5k lines of test cases that all pass on both Windows and Linux. We have created 3 performance example applications with results ranging from 3X slow-downs to 100X speed-ups. Rootbeer is free and open-source software licensed under the GNU General Public License version 3 (GPLv3).

78 citations


Book ChapterDOI
27 May 2012
TL;DR: This paper shows how to enhance the exchanged HDT with additional structures to support some basic forms of SPARQL query resolution without the need of "unpacking" the data.
Abstract: Huge RDF datasets are currently exchanged in textual RDF formats, so consumers need to post-process them using RDF stores for local consumption, such as indexing and SPARQL querying. This is a painful task requiring great effort in terms of time and computational resources. A first approach to lightweight data exchange is a compact (binary) RDF serialization format called HDT. In this paper, we show how to enhance the exchanged HDT with additional structures to support some basic forms of SPARQL query resolution without the need to "unpack" the data. Experiments show that i) the exchange efficiency outperforms universal compression, ii) post-processing becomes a fast process, and iii) the enhanced representation provides competitive query performance at consumption time.

69 citations


Journal ArticleDOI
TL;DR: A unified view of the research efforts aimed at SOAP performance enhancement is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
Abstract: The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA ) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macrolevel parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. 
A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.

61 citations


Proceedings ArticleDOI
11 Jun 2012
TL;DR: An approach for synthesizing data representations for concurrent programs that takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures as well as the placement and acquisition of locks to synchronize concurrent access to those data structures.
Abstract: We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly and the aggregate set of operations is serializable and deadlock free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark.

59 citations


Proceedings ArticleDOI
01 Oct 2012
TL;DR: Streaming HDT, a lightweight serialization format for RDF documents that allows for transmitting compressed documents with minimal effort for the encoding, is introduced, tailored for typical IoT applications where the embedded devices are often senders and seldom receivers of complete documents.
Abstract: We present the platform-independent Wiselib RDF Provider for embedded IoT devices such as sensor nodes. It enables the devices to act as semantic data providers. They can describe themselves, including their services, sensors, and capabilities, by means of RDF documents. Used in a protocol stack that provides Internet connectivity (6LowPAN) and a service layer (CoAP), a sensor can autoconfigure itself, connect to the Internet, and provide Linked Data without manual intervention. We introduce Streaming HDT, a lightweight serialization format for RDF documents that allows for transmitting compressed documents with minimal effort for the encoding; this is tailored for typical IoT applications where the embedded devices are often senders and seldom receivers of complete documents.

49 citations


Patent
Pavandeep Kalra, Sergey Boldyrev
29 Jun 2012
TL;DR: In this article, an approach for facilitating real-time execution of computations of data based on context information upon collection, storage, retrieval, or use of the data is presented.
Abstract: An approach is provided for facilitating real-time execution of computations of data based on context information upon collection, storage, retrieval or use of the data. A data analysis platform determines context information associated with one or more data items stored at one or more nodes. It also determines one or more computations for processing the one or more data items based on the context information. A serialization of the one or more computations or corresponding context information is associated with the one or more data items for processing by the one or more nodes.

30 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: This work defines the problem of bank and value conflict optimization for data processing operators on the CUDA platform, and uses two database operations, foreign-key join and grouped aggregation, to analyze the impact of these two factors on operator performance.
Abstract: Implementations of database operators on GPU processors have shown dramatic performance improvements compared to multicore-CPU implementations. GPU threads can cooperate using shared memory, which is organized in interleaved banks and is fast only when threads read and modify addresses belonging to distinct memory banks. Therefore, data processing operators implemented on a GPU, in addition to contention caused by popular values, have to deal with a new performance-limiting factor: thread serialization when accessing values belonging to the same bank. Here, we define the problem of bank and value conflict optimization for data processing operators on the CUDA platform. To analyze the impact of these two factors on operator performance we use two database operations: foreign-key join and grouped aggregation. We suggest and evaluate techniques for optimizing the data arrangement offline by creating clones of values to reduce overall memory contention. Results indicate that columns used for writes, such as grouping columns, need to be optimized to fully exploit the maximum bandwidth of shared memory.

Book ChapterDOI
23 Sep 2012
TL;DR: This work extracted the send/recv-buffer access pattern of a representative set of scientific applications into micro-applications that isolate their data access patterns and found significant performance discrepancies between state-of-the-art MPI implementations.
Abstract: Data is often communicated from different locations in application memory and is commonly serialized (copied) to send buffers or from receive buffers. MPI datatypes are a way to avoid such intermediate copies and optimize communications, however, it is often unclear which implementation and optimization choices are most useful in practice. We extracted the send/recv-buffer access pattern of a representative set of scientific applications into micro-applications that isolate their data access patterns. We also observed that the buffer-access patterns in applications can be categorized into three different groups. Our micro-applications show that up to 90% of the total communication time can be spent with local serialization and we found significant performance discrepancies between state-of-the-art MPI implementations. Our micro-applications aim to provide a standard benchmark for MPI datatype implementations to guide optimizations similarly to SPEC CPU and the Livermore loops do for compiler optimizations.
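The intermediate copy the abstract refers to can be made concrete: sending a non-contiguous column of a matrix usually means packing (serializing) it into a contiguous send buffer first, a copy that MPI derived datatypes let the library avoid. A plain Python sketch of the packing step:

```python
# A 4x5 matrix stored row by row; any single column of it is
# non-contiguous in memory.
rows, cols = 4, 5
matrix = [[r * cols + c for c in range(cols)] for r in range(rows)]

def pack_column(m, col):
    """Serialize one strided column into a contiguous send buffer,
    the local copy that a good MPI datatype implementation eliminates."""
    return [row[col] for row in m]

send_buffer = pack_column(matrix, 2)
print(send_buffer)  # [2, 7, 12, 17]
```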

Patent
04 Sep 2012
TL;DR: A web service is used to send user-specified GPU kernel functions and input data sets to a remote computer equipped with a programmable GPU for execution.
Abstract: User-specified GPU kernel functions and input data sets are sent over a Web service to a remote computer equipped with a programmable GPU (Graphics Processing Unit) for execution. The Web service then returns resulting data to a client, which uses the same Web service. This is accomplished by incorporating a serialized request formed from the GPU kernel function code and input data set by using JavaScript® Object Notation (JSON) serialization. The request is then sent to the remote computer and programmable GPU, where the request is deserialized, kernel code is compiled, and input data copied to the GPU memory on the remote computer. The GPU kernel function is then executed, and output data is copied from the GPU memory on the remote computer and reserialized using JSON to form a serialized response. The serialized response is then returned to the client via the web service.
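The request/response flow described in the patent can be sketched as follows. This is an illustrative Python version in which an ordinary function stands in for the compiled GPU kernel; the names (`handle`, `square`) are invented for the example:

```python
import json

# Hypothetical request: kernel identifier plus input data, JSON-serialized
# as in the patent. The real system would compile and run CUDA code on a
# remote GPU; here a Python function stands in for the kernel.
request = json.dumps({
    "kernel": "square",             # which kernel to run (stand-in name)
    "input": [1.0, 2.0, 3.0, 4.0],  # input data set
})

def handle(serialized_request):
    """Server side: deserialize the request, execute, reserialize the response."""
    req = json.loads(serialized_request)
    kernels = {"square": lambda xs: [x * x for x in xs]}
    output = kernels[req["kernel"]](req["input"])
    return json.dumps({"output": output})

response = json.loads(handle(request))
print(response["output"])  # [1.0, 4.0, 9.0, 16.0]
```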

Journal ArticleDOI
TL;DR: This work investigates how serializing contention management (CM) influences the performance of STM systems and implements adaptive algorithms that control the activation of serializing CM according to the measured contention level, based on a novel low-overhead serialization mechanism.

Patent
Kirk J. Krauss
26 Mar 2012
TL;DR: A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program is presented: a supervisor thread controls and monitors execution of identified threads to determine their status.
Abstract: A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. With a supervisor thread, execution of the identified threads can be controlled and execution of the identified threads can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the threads can be output.

Proceedings ArticleDOI
24 Sep 2012
TL;DR: A C++ template-based generic parallel programming skeleton for remote sensing applications on high-performance clusters, providing both programming templates for distributed RS data and generic parallel skeletons for RS algorithms.
Abstract: Remote Sensing (RS) data processing is characterized by massive remote sensing images and a growing number of algorithms of higher complexity. Parallel programming for data-intensive applications like massive remote sensing image processing is bound to be especially nontrivial and challenging. We propose a C++ template-based generic parallel programming skeleton for these remote sensing applications on high-performance clusters. It provides both programming templates for distributed RS data and generic parallel skeletons for RS algorithms. Through the one-sided communication primitives provided by MPI, the distributed RS data template provides a global view of big RS data whose sliced data blocks are scattered among the distributed memory of cluster nodes. Moreover, through data serialization and RMA (Remote Memory Access), the data templates also offer a simple and effective way to distribute and communicate massive remote sensing data with complex data structures. Furthermore, the generic parallel skeletons implement recurring patterns of computation and performance optimization, and accept user-defined sequential functions as template parameters for type genericity. With the implemented skeletons, developers without extensive parallel computing expertise can implement efficient parallel remote sensing programs without concerning themselves with parallel computing details. Through experiments on remote sensing applications, we confirmed that our templates are productive and efficient.

Proceedings ArticleDOI
10 Dec 2012
TL;DR: Two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects, are implemented and tested.
Abstract: The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. For example, in small-scale MOS only a few objects need to be simultaneously selected, which may be accomplished by serializing existing single-object selection techniques. In this paper, we explore various MOS tools for large-scale MOS, that is, when the objects to be selected are counted in hundreds or even thousands. This makes serialization of single-object techniques prohibitively time consuming. Instead, we have implemented and tested two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance of the MOS tools is affected by the geometric configuration of the objects to be selected. Our studies demonstrate that the performance of MOS techniques is very significantly affected by the geometric scenario facing the user. Furthermore, we demonstrate that a good match between MOS tool shape and the geometric configuration is not always preferable if the applied tool is complex to use.

Book ChapterDOI
05 Nov 2012
TL;DR: This paper proposes an approach using model-based techniques for improving component reusability through the development of a generic meta-model capable of representing data types from different frameworks and their relations and describes requirements on robotics frameworks to further increase the level of interoperability between available components.
Abstract: The emerging availability of high-quality software repositories for robotics promises to speed up the construction process of robotic systems through systematic reuse of software components. However, to reuse components without modification, compatibility at the interface level needs to be created, which is particularly hard if components were implemented in different robotic frameworks. In this paper we propose an approach using model-based techniques for improving component reusability. We specifically address data type compatibility in a structured way through the development of a generic meta-model capable of representing data types from different frameworks and their relations. Based on this model a code generator emits serialization code which makes it possible to seamlessly reuse the existing data types of different frameworks. The application of this approach is exemplified by connecting the YARP-based iCub simulation with a component architecture using a current robotics middleware. Based on our experiences we describe requirements on robotics frameworks to further increase the level of interoperability between available components.

Patent
29 Jun 2012
TL;DR: In this paper, object serialization is used to communicate references to shim objects and manage memory on worker processes of a distributed software application, where the shim object is configured to store a reference to an instance of a native partitioned array.
Abstract: Embodiments are directed to using object serialization to communicate references to shim objects and to managing memory on worker processes of a distributed software application. In one scenario, a computer system instantiates shim objects on one or more ranks of a distributed application. The shim objects are configured to store a reference to an instance of a native partitioned array, where the reference includes a unique identifier for the native partitioned array instance. The computer system then serializes the shim objects for communication of the stored references from the master rank of the distributed application to various other worker ranks of the distributed application. Then, upon serializing the shim objects, the computer system communicates the shim object's stored references to the other worker ranks of the distributed application.

Patent
18 Jul 2012
TL;DR: A method for storing and reading list data is proposed. The storing step receives list data submitted by a user, converts it into a Java object, serializes the object into an XML (Extensible Markup Language) character string using the open-source XStream library, compresses the XML string into byte arrays, and stores the byte arrays in a database; the reading step reverses this process.
Abstract: The invention discloses a method for storing and reading list data, comprising a list data storing step and a list data reading step. The storing step comprises: receiving list data submitted by a user, converting the list data into a Java object, serializing the Java object into an XML (Extensible Markup Language) character string using the open-source XStream library, compressing the XML character string into byte arrays, and storing the byte arrays in a database. The reading step comprises: reading the byte arrays from the database, decompressing the byte arrays into the XML character string, deserializing the XML character string into the Java object using XStream, and converting the Java object into the list data. In the invention, the list is treated as a whole: once converted into a Java object, it is serialized into an XML character string by the open-source library, and the XML character string can be stored in a variety of databases and is simple to store.
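The storing and reading paths can be sketched in Python. Here json stands in for XStream's XML serialization, zlib for the compression step, and a dict for the database; all three are stand-ins for illustration, not the patent's actual stack:

```python
import json
import zlib

# Storing path: object -> character string -> compressed byte array -> database.
# Reading path reverses each step.
form_data = {"field": "email", "value": "user@example.com", "rows": list(range(50))}

def store(db, key, obj):
    serialized = json.dumps(obj)                  # object -> string
    db[key] = zlib.compress(serialized.encode())  # string -> compressed bytes

def load(db, key):
    serialized = zlib.decompress(db[key]).decode()  # bytes -> string
    return json.loads(serialized)                   # string -> object

db = {}
store(db, "form:1", form_data)
restored = load(db, "form:1")
print(restored == form_data)  # True
```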

Patent
Robert Matthew Aman
26 Jul 2012
TL;DR: In this article, a process for serializing and deserializing instance data from a schema is disclosed, where a schema can be used to automatically and dynamically generate classes and methods.
Abstract: A process for serializing and deserializing instance data from a schema is disclosed. A schema can be used to automatically and dynamically generate classes and methods. First, the raw schema may be parsed into an intermediate data structure consisting of pairs representing object properties and attributes of the properties. Then, an exemplary process generates new parser classes and methods by iterating over the intermediate data structure's keys and generating classes or class variables based on the property type. Accessors and mutators are generated for each class variable. Additionally, a serialization method and a constructor method are generated for each class. These classes and methods are stored in memory and can be used by a host programming language to transmit, receive, and manipulate data to or from an API.
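The generation step can be sketched with Python's dynamic class creation. The schema shape and names below (`generate_class`, the `type` key) are hypothetical illustrations, not the patent's actual format, and accessor/mutator generation is elided in favor of plain attributes:

```python
import json

# A raw schema parsed into (property, attributes) pairs, as in the patent.
raw_schema = {"Person": {"name": {"type": str}, "age": {"type": int}}}

def generate_class(class_name, properties):
    """Dynamically generate a class with a constructor and a
    serialization method, driven entirely by the schema."""
    def __init__(self, **kwargs):
        for prop, attrs in properties.items():
            value = kwargs.get(prop)
            if value is not None and not isinstance(value, attrs["type"]):
                raise TypeError(f"{prop} must be {attrs['type'].__name__}")
            setattr(self, prop, value)

    def serialize(self):
        return json.dumps({p: getattr(self, p) for p in properties})

    return type(class_name, (), {"__init__": __init__, "serialize": serialize})

Person = generate_class("Person", raw_schema["Person"])
alice = Person(name="Alice", age=30)
print(alice.serialize())  # {"name": "Alice", "age": 30}
```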

Patent
01 Aug 2012
TL;DR: In this paper, a method and a system for processing a network service request is presented, which comprises the following steps that: a client serializes an object of a request to be transmitted to a background server, and transmits a request taking a request character string obtained by the serialization as a parameter to the background server.
Abstract: The invention discloses a method and a system for processing a network service request. The method comprises the following steps that: a client serializes an object of a request to be transmitted to a background server, and transmits a request taking a request character string obtained by the serialization as a parameter to the background server; the background server receives the request from the client, and de-serializes the request character string in the received request to obtain a corresponding object; the background server acquires an object to be fed back on the basis of the corresponding object, and serializes the object to be fed back to obtain a feedback character string; and the client receives the feedback character string, and de-serializes the feedback character string to obtain corresponding object information. By the method, the operation of the client and/or the background server is simplified.

Patent
19 Sep 2012
TL;DR: In this article, a method and a system for tracking user behaviors on mobile equipment is presented, where an input and output interface layer probe records actions when application software invokes input/output interface layer functions; parameters transmitted into the input/outline interfaces by the application software are recorded; and an environment information recording module records environment information at regular time.
Abstract: The invention discloses a method and a system for tracking user behaviors on mobile equipment. An input and output interface layer probe records actions when application software invokes input and output interface layer functions; parameters transmitted into the input and output interface layer functions by the application software are recorded; an environment information recording module records environment information at regular time; actions when the application software invokes the input and output interface layer functions, the parameters transmitted into the input and output interface layer functions by the application software and the environment information are converged into meta information, and in addition, the meta information is transmitted to a meta information analysis system; the meta information is interpreted and analyzed, the functional logic serialization information of the application software used by the users is generated and is sent to a user behavior analysis and statistical system; and then, the user behaviors are subjected to statistical analysis according to the serialization information. When the technical scheme disclosed by the invention is adopted, accurate statistical analysis data can be provided for mastering the operation condition of the application software on the mobile equipment, the user behavior and the like.

Proceedings ArticleDOI
10 Jul 2012
TL;DR: In this article, the authors present an unexpected problem which can occur in dependency-driven task parallelization models like StarSs: the tasks accessing a specific spatial domain are treated as interdependent, as dependencies are detected automatically via memory addresses.
Abstract: Spatial decomposition is a popular basis for parallelising code. Cast in the frame of task parallelism, calculations on a spatial domain can be treated as a task. If neighbouring domains interact and share results, access to the specific data needs to be synchronized to avoid race conditions. This is the case for a variety of applications, like most molecular dynamics and many computational fluid dynamics codes. Here we present an unexpected problem which can occur in dependency-driven task parallelization models like StarSs: the tasks accessing a specific spatial domain are treated as interdependent, as dependencies are detected automatically via memory addresses. Thus, the order in which tasks are generated will have a severe impact on the dependency tree. In the worst case, a complete serialization is reached and no two tasks can be calculated in parallel. We present the problem in detail based on an example from molecular dynamics, and introduce a theoretical framework to calculate the degree of serialization. Furthermore, we present strategies to avoid this unnecessary problem. We recommend treating these strategies as best practice when using dependency-driven task parallel programming models like StarSs on such scenarios.

Patent
11 Jul 2012
TL;DR: In this article, a cross-platform communication system running on clients of different platforms is proposed to unify the conversion standards between platform-specific objects and a binary stream.
Abstract: The invention provides a cross-platform communication method and a cross-platform communication system. The communication system runs on clients of different platforms and unifies the conversion standards between platform-specific objects and a binary stream. The system comprises an object resolution module, which performs the conversion between the objects of the different platforms and the binary stream. The communication method comprises the following steps: a first platform serializes a constructed object into a binary stream according to the conversion standard; the binary stream is transmitted to a second platform through a network; and the second platform deserializes the received binary stream into an object usable on the second platform according to the same conversion standard. The scheme overcomes the differences between platforms, allows objects to be conveniently transmitted and recovered, and facilitates communication between the different platforms.
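A minimal sketch of such a unified conversion standard, assuming a fixed field layout (big-endian integers followed by a length-prefixed UTF-8 string); the layout itself is an illustrative assumption, not the patent's specification.

```python
import struct

class Point:
    def __init__(self, x, y, label):
        self.x, self.y, self.label = x, y, label

def to_binary(p):
    """First platform: serialize the object into a binary stream."""
    encoded = p.label.encode("utf-8")
    # Two signed 32-bit ints, then an unsigned length prefix, all big-endian.
    return struct.pack(">iiI", p.x, p.y, len(encoded)) + encoded

def from_binary(stream):
    """Second platform: deserialize the stream back into a usable object."""
    x, y, n = struct.unpack_from(">iiI", stream, 0)
    label = stream[12:12 + n].decode("utf-8")
    return Point(x, y, label)

sent = to_binary(Point(3, -7, "node-A"))   # transmitted over the network
received = from_binary(sent)
print(received.x, received.y, received.label)  # 3 -7 node-A
```

Because both endpoints agree on byte order and field layout, any platform that implements the same `to_binary`/`from_binary` pair can exchange objects with the others.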

Journal Article
Guo Wei1
TL;DR: Taking design knowledge resource combination optimization of mold SMEs as an example, the optimal design knowledge resource serialization combination under the constraints of cost and quality was solved according to the mathematical model.

Proceedings ArticleDOI
15 Feb 2012
TL;DR: This work proposes Dynamic Serialization (DS) as a new technique to improve energy consumption without degrading performance in applications with conflicting transactions, which is implemented on top of a hardware transactional memory system with an eager conflict management policy.
Abstract: In the search for new paradigms to simplify multithreaded programming, Transactional Memory (TM) is currently being advocated as a promising alternative to deadlock-prone lock-based synchronization. In this way, future many-core CMP architectures may need to provide hardware support for TM. On the other hand, power dissipation constitutes a first class consideration in multicore processor designs. In this work, we propose Dynamic Serialization (DS) as a new technique to improve energy consumption without degrading performance in applications with conflicting transactions. Our proposal, which is implemented on top of a hardware transactional memory system with an eager conflict management policy, detects and serializes conflicting transactions dynamically. Particularly, in case of conflict one transaction is allowed to continue whilst the rest are completely stalled. Once the executing transaction has finished it wakes up several of the stalling transactions. This brings important benefits in terms of energy consumption due to the reduction in the amount of wasted work that DS implies. Results for a 16-core CMP show that Dynamic Serialization obtains reductions of 10% on average in energy consumption (more than 20% in high contention scenarios) without affecting, on average, execution time.
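A toy accounting of why stalling beats aborting, in the spirit of this proposal: with an eager policy that aborts, the losers of a conflict discard the work they have already executed, whereas Dynamic Serialization stalls them and lets them resume, discarding nothing. The cycle counts and conflict pattern below are invented purely to illustrate the bookkeeping, not measured from the paper.

```python
def wasted_work_abort(tx_cycles, conflict_fraction=0.5):
    """Eager policy without DS: on conflict, all but one transaction abort
    and re-execute from scratch, so their partial work is wasted."""
    winner, *losers = tx_cycles
    return sum(int(c * conflict_fraction) for c in losers)

def wasted_work_dynamic_serialization(tx_cycles):
    """DS: conflicting transactions are stalled, not aborted; when the
    running transaction commits, a stalled one resumes where it left off,
    so no executed work is discarded."""
    return 0

conflicting = [100, 100, 100, 100]  # four transactions contending on the same data
print(wasted_work_abort(conflicting))                   # 150
print(wasted_work_dynamic_serialization(conflicting))   # 0
```

Less wasted work means fewer re-executed instructions, which is where the abstract's reported energy savings come from, especially under high contention.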

Patent
13 Jan 2012
TL;DR: In this paper, one or more entities within the serialized structure may be deserialized for a second client based upon evaluating the second client against the set of deserialization permissions to determine which entities the second client has permission to access.
Abstract: Among other things, one or more techniques and/or systems are provided for controlling the serialization of data into a serialized structure and/or the deserialization of data from the serialized structure. That is, a first client may request serialization of data comprising one or more entities. Entities that the first client has permission to serialize may be serialized for inclusion within a serialized structure, which may be encrypted. A set of deserialization permissions specifying which entities may be accessed by which clients may be defined for the serialized structure. In this way, one or more entities within the serialized structure may be deserialized for a second client based upon evaluating the second client against the set of deserialization permissions to determine which entities the second client has permission to access. The serialized structure may otherwise remain encrypted to provide sustained protection of serialized data comprised therein.
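A hypothetical sketch of the permission check the patent describes: a serialized structure carries several entities, and each client only gets back the entities its deserialization permissions allow. Encryption of the structure is omitted here for brevity; the entity names and permission table are illustrative assumptions.

```python
import json

def serialize(entities):
    """First client: serialize the entities it has permission to serialize."""
    return json.dumps(entities).encode("utf-8")

def deserialize_for(client, blob, permissions):
    """Return only the entities this client may access; everything else in
    the serialized structure remains opaque to it."""
    allowed = permissions.get(client, set())
    entities = json.loads(blob.decode("utf-8"))
    return {name: value for name, value in entities.items() if name in allowed}

blob = serialize({"profile": {"name": "A"}, "payment": {"card": "XXXX"}})
perms = {"viewer": {"profile"}, "billing": {"profile", "payment"}}
print(sorted(deserialize_for("viewer", blob, perms)))   # ['profile']
print(sorted(deserialize_for("billing", blob, perms)))  # ['payment', 'profile']
```

In the patented scheme the blob would additionally be encrypted, so entities outside a client's permission set stay protected even if the structure itself is shared.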

Patent
28 Aug 2012
TL;DR: In this paper, the authors present a method to store a graphical user interface rendered by an instance of a production data management system framework in conjunction with data from at least one data source.
Abstract: A method can include providing an object that represents at least selected menu items that contextualize a graphical user interface rendered by an instance of a production data management system framework in conjunction with data from at least one data source; receiving a request to store the contextualized graphical user interface; responsive to the request, serializing the object to mark-up language; and storing the mark-up language as a file to a data storage device, the file configured for subsequent deserializing of the mark-up language for generating a copy of the object and for rendering of the contextualized graphical user interface according to the copy of the object. Various other apparatuses, systems, methods, etc., are also disclosed.
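The round trip the claim describes can be sketched as follows: an object capturing selected menu items that contextualize a GUI is serialized to mark-up language, stored, and later deserialized to generate a copy of the object. The class and element names here are assumptions for illustration, not the patent's API, and the menu items are placeholders.

```python
import xml.etree.ElementTree as ET

class GuiContext:
    """Object representing the selected menu items that contextualize a GUI."""
    def __init__(self, menu_items):
        self.menu_items = menu_items

def to_markup(ctx):
    """Serialize the context object to mark-up language for storage as a file."""
    root = ET.Element("gui-context")
    for item in ctx.menu_items:
        ET.SubElement(root, "menu-item", name=item)
    return ET.tostring(root, encoding="unicode")

def from_markup(markup):
    """Deserialize the stored mark-up to generate a copy of the object."""
    root = ET.fromstring(markup)
    return GuiContext([e.get("name") for e in root.findall("menu-item")])

saved = to_markup(GuiContext(["wells", "logs", "simulation"]))
copy = from_markup(saved)
print(copy.menu_items)  # ['wells', 'logs', 'simulation']
```

Rendering the copied object would then reproduce the same contextualized view that was stored.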