
Showing papers on "Data management" published in 2008


Journal ArticleDOI
TL;DR: Based on a review of the academic and popular talent management literatures, the authors clarify what is meant by talent management and why it is important (particularly with respect to its effect on employee recruitment, retention and engagement).
Abstract: Purpose – The purpose of this article is to clarify what is meant by talent management and why it is important (particularly with respect to its effect on employee recruitment, retention and engagement), as well as to identify factors that are critical to its effective implementation. Design/methodology/approach – This article is based on a review of the academic and popular talent management literatures. Findings – Talent management is an espoused and enacted commitment to implementing an integrated, strategic and technology-enabled approach to human resource management (HRM). This commitment stems in part from the widely shared belief that human resources are the organization's primary source of competitive advantage; an essential asset that is in increasingly short supply. The benefits of an effectively implemented talent management strategy include improved employee recruitment and retention rates, and enhanced employee engagement. These outcomes in turn have been associated with improved operational and financial performance. The external and internal drivers and restraints for talent management are many. Of particular importance is senior management understanding and commitment. Practical implications – Hospitality organizations interested in implementing a talent management strategy would be well advised to: define what is meant by talent management; ensure CEO commitment; align talent management with the strategic goals of the organization; establish talent assessment, data management and analysis systems; ensure clear line management accountability; and conduct an audit of all HRM practices in relation to evidence‐based best practices. Originality/value – This article will be of value to anyone seeking to better understand talent management or to improve employee recruitment, retention and engagement.

519 citations



Journal ArticleDOI
TL;DR: The research found several key concepts behind the Japanese approach to 5S management, linking 5S to aspects of the Japanese management approach that are aligned with an integrated management system rather than a simple management tool or technique.
Abstract: Purpose – Building on previous studies of the managerial application and development of the 5S concept (5S), this research aims to identify and present key concepts of 5S from a Japanese management perspective. These findings link 5S to aspects of the Japanese management approach, which are aligned with an integrated management system rather than a simple management tool or technique. Design/methodology/approach – Data were collected from Japanese companies that use 5S as a core management approach and use their organisational web sites to disseminate information in regard to this practice. The data were examined by the use of computer‐aided lexical analysis (Leximancer), which provided an insight into the nature of 5S within the original Japanese context. Findings – The research found several key concepts behind the Japanese approach to 5S management. These findings demonstrate the importance of both the technical (visible) and philosophical (invisible) approaches required for each of the 5S components and are d...

257 citations


Journal ArticleDOI
TL;DR: In this article, the main contributions of human resource dimensions to environmental management in a company are discussed, and a model that analyses the relationships between these dimensions and the typical phases of an environmental management system is presented, within a perspective of application for academicians and managers.

255 citations


Journal ArticleDOI
TL;DR: This methodological paper addresses practical strategies, implications, benefits and drawbacks of collecting qualitative semi-structured interview data about Internet-based research topics using four different interaction systems: face to face; telephone; email; and instant messaging.
Abstract: This methodological paper addresses practical strategies, implications, benefits and drawbacks of collecting qualitative semi-structured interview data about Internet-based research topics using four different interaction systems: face to face; telephone; email; and instant messaging. The discussion presented here is based on a review of the literature and reflection on the experiences of the authors in performing completed research that used those four interaction systems. The focus is on functional effects (e.g. scheduling and other logistics, data transcription and data management), as well as methodological effects (e.g. ability to probe, collecting affective data, and data representation). The authors found that all four methods of data collection produced viable data for the projects they completed, but that some additional issues arose. Five themes emerged that form the organization of the paper: (1) interview scheduling and participant retention; (2) recording and transcribing; (3) data cleaning a...

235 citations


Journal ArticleDOI
01 Aug 2008
TL;DR: This paper reports on the results of an independent evaluation of the techniques presented in the VLDB 2007 paper "Scalable Semantic Web Data Management Using Vertical Partitioning", as well as a complementary analysis of state-of-the-art RDF storage solutions.
Abstract: This paper reports on the results of an independent evaluation of the techniques presented in the VLDB 2007 paper "Scalable Semantic Web Data Management Using Vertical Partitioning", authored by D. Abadi, A. Marcus, S. R. Madden, and K. Hollenbach [1]. We revisit the proposed benchmark and examine both the data and query space coverage. The benchmark is extended to cover a larger portion of the query space in a canonical way. Repeatability of the experiments is assessed using the code base obtained from the authors. Inspired by the proposed vertically-partitioned storage solution for RDF data and the performance figures using a column-store, we conduct a complementary analysis of state-of-the-art RDF storage solutions. To this end, we employ MonetDB/SQL, a fully-functional open source column-store, and a well-known -- for its performance -- commercial row-store DBMS. We implement two relational RDF storage solutions -- triple-store and vertically-partitioned -- in both systems. This allows us to expand the scope of [1] with the performance characterization along both dimensions -- triple-store vs. vertically-partitioned and row-store vs. column-store -- individually, before analyzing their combined effects. A detailed report of the experimental test-bed, as well as an in-depth analysis of the parameters involved, clarify the scope of the solution originally presented and position the results in a broader context by covering more systems.
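
The storage-layout contrast at the heart of this evaluation is easy to demonstrate in miniature. Below is a minimal sketch of the two relational RDF layouts, triple-store and vertically partitioned, using SQLite in place of MonetDB/SQL or a commercial row-store; the predicate names and data are illustrative, not the paper's benchmark.

```python
import sqlite3

# Minimal sketch of the two relational RDF layouts compared above,
# using SQLite in place of MonetDB/SQL or a commercial row-store.
# Predicate names and data are illustrative only.
conn = sqlite3.connect(":memory:")

triples = [
    ("person:alice", "foaf:name", "Alice"),
    ("person:alice", "foaf:mbox", "alice@example.org"),
    ("person:bob",   "foaf:name", "Bob"),
]

# Layout 1: a single triple-store table (subject, predicate, object).
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", triples)

# Layout 2: vertical partitioning -- one two-column (subject, object)
# table per predicate, so a query touching one predicate scans less data.
for s, p, o in triples:
    table = p.replace(":", "_")
    conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" (s TEXT, o TEXT)')
    conn.execute(f'INSERT INTO "{table}" VALUES (?, ?)', (s, o))

# The same lookup against both layouts:
print(conn.execute("SELECT o FROM triples WHERE p = 'foaf:name'").fetchall())
print(conn.execute("SELECT o FROM foaf_name").fetchall())
```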

232 citations


Patent
19 Dec 2008
TL;DR: In this paper, the authors present methods, systems, and apparatuses for generating and delivering analytic results for any simple or highly complex problem for which data exists that software or similar automated means can analyze.
Abstract: The present invention comprises methods, systems, and apparatuses for generating and delivering analytic results for any simple or highly complex problem for which data exists that software or similar automated means can analyze. The present invention thus contemplates methods, systems, apparatuses, software, software processes, computer-readable medium, and/or data structures to enable performance of these and other features. In one embodiment, a method of the present invention comprises extracting and converting data using a data management component into a form usable by a data mining component, and performing data mining to develop a model in response to a question or problem posed by a user.

232 citations


Patent
26 Nov 2008
TL;DR: In this paper, the authors propose a method and apparatus for storing data on application-level activity and other user information to enable real-time multi-dimensional reporting about a user of a mobile data network.
Abstract: A method and apparatus for storing data on application-level activity and other user information to enable real-time multi-dimensional reporting about a user of a mobile data network. A data manager receives information about application-level activity from a mobile data network and stores the information to provide dynamic real-time reporting on network usage. The data manager comprises a database, data processing module, and analytics module. The database stores the application-level data for a predetermined period of time. The data processing module monitors the data to determine if it corresponds to a set of defined reports. If the data is relevant, the processing module updates the defined reports. The analytics module accesses the database to retrieve information satisfying operator queries about network usage. If the operator chooses to convert the query into a defined report, the analytics module creates a newly defined report and populates it accordingly.
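
As a rough illustration of the flow the patent describes (records arrive, a processing module updates any matching defined reports so operator queries read precomputed results), here is a hypothetical Python sketch; the class and field names are invented for illustration, not taken from the patent.

```python
from collections import defaultdict

# Hypothetical sketch of the data-manager flow described above:
# application-level records arrive, and a processing module updates any
# matching defined reports incrementally. All names are illustrative.

class DefinedReport:
    def __init__(self, name, predicate, dimension):
        self.name = name
        self.predicate = predicate      # which records are relevant
        self.dimension = dimension      # field to aggregate on
        self.counts = defaultdict(int)

    def update(self, record):
        if self.predicate(record):
            self.counts[record[self.dimension]] += 1

reports = [
    DefinedReport("records_by_app",
                  predicate=lambda r: r["bytes"] > 0,
                  dimension="app"),
]

for record in [{"app": "web", "bytes": 512}, {"app": "mail", "bytes": 128},
               {"app": "web", "bytes": 2048}]:
    for report in reports:              # data processing module step
        report.update(record)

print(dict(reports[0].counts))          # {'web': 2, 'mail': 1}
```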

192 citations


Book
28 Sep 2008
TL;DR: Master Data Management equips you with a deeply practical, business-focused way of thinking about MDM -- an understanding that will greatly enhance your ability to communicate with stakeholders and win their support.
Abstract: The key to a successful MDM initiative isn't technology or methods, it's people: the stakeholders in the organization and their complex ownership of the data that the initiative will affect. Master Data Management equips you with a deeply practical, business-focused way of thinking about MDM -- an understanding that will greatly enhance your ability to communicate with stakeholders and win their support. Moreover, it will help you deserve their support: you'll master all the details involved in planning and executing an MDM project that leads to measurable improvements in business productivity and effectiveness.
* Presents a comprehensive roadmap that you can adapt to any MDM project.
* Emphasizes the critical goal of maintaining and improving data quality.
* Provides guidelines for determining which data to master.
* Examines special issues relating to master data metadata.
* Considers a range of MDM architectural styles.
* Covers the synchronization of master data across the application infrastructure.

190 citations


Journal ArticleDOI
TL;DR: The research demonstrated that it is possible to transfer a high level of geometric and semantic information acquired from BIMs into the geospatial environment, and that BIMs provide a sufficient level and amount of data to support data management tasks in the site selection and fire response management processes.

189 citations


Proceedings ArticleDOI
19 May 2008
TL;DR: This paper examines some of the issues in the area of data management related to workflow creation, execution, and result management in the context of the entire workflow lifecycle.
Abstract: Scientific workflows play an important role in today's science. Many disciplines rely on workflow technologies to orchestrate the execution of thousands of computational tasks. Much research to-date focuses on efficient, scalable, and robust workflow execution, especially in distributed environments. However, many challenges remain in the area of data management related to workflow creation, execution, and result management. In this paper we examine some of these issues in the context of the entire workflow lifecycle.

Journal ArticleDOI
TL;DR: The idea of Process Data Warehousing is advocated as a means to provide a knowledge management and integration platform for engineering design processes that enables the capture and reuse of design experience, supported by advanced computer science methods.

Journal ArticleDOI
01 Aug 2008
TL;DR: This paper introduces BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system; it presents BayesStore's uncertainty model, based on a novel first-order statistical model, and redefines traditional query processing operators to manipulate the data and the probabilistic models of the database in an efficient manner.
Abstract: Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensory readings for a variety of reasons, including motion prediction and human behavior modeling. Such probabilistic data analyses require sophisticated machine-learning tools that can effectively model the complex spatio/temporal correlation patterns present in uncertain sensory data. Unfortunately, to date, most existing approaches to probabilistic database systems have relied on somewhat simplistic models of uncertainty that can be easily mapped onto existing relational architectures: Probabilistic information is typically associated with individual data tuples, with only limited or no support for effectively capturing and reasoning about complex data correlations. In this paper, we introduce BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system. Adopting a machine-learning view, BAYESSTORE employs concise statistical relational models to effectively encode the correlation patterns between uncertain data, and promotes probabilistic inference and statistical model manipulation as part of the standard DBMS operator repertoire to support efficient and sound query processing. We present BAYESSTORE's uncertainty model based on a novel, first-order statistical model, and we redefine traditional query processing operators, to manipulate the data and the probabilistic models of the database in an efficient manner. Finally, we validate our approach, by demonstrating the value of exploiting data correlations during query processing, and by evaluating a number of optimizations which significantly accelerate query processing.
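
To make the correlation point concrete, here is a toy Python example (not BayesStore's actual model) contrasting the answer a tuple-independent system would compute with the answer under a joint distribution over two correlated uncertain tuples.

```python
# Toy illustration of why tuple-level independence is too weak: with
# correlated uncertain tuples, the probability that a query returns both
# readings is not the product of the per-tuple marginals.

# Joint distribution over two uncertain sensor readings r1, r2
# (True = reading is valid). The readings are positively correlated.
joint = {
    (True, True): 0.45,
    (True, False): 0.05,
    (False, True): 0.05,
    (False, False): 0.45,
}

p_r1 = sum(p for (r1, _), p in joint.items() if r1)   # marginal of r1: 0.5
p_r2 = sum(p for (_, r2), p in joint.items() if r2)   # marginal of r2: 0.5

independent_estimate = p_r1 * p_r2                    # 0.25
true_probability = joint[(True, True)]                # 0.45

print(independent_estimate, true_probability)
```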

Journal ArticleDOI
TL;DR: This work addresses the challenge of recording uniform and usable provenance metadata that meets the domain needs while minimizing the modification burden on the service authors and the performance overhead on the workflow engine and the services.
Abstract: The increasing ability for the sciences to sense the world around us is resulting in a growing need for data-driven e-Science applications that are under the control of workflows composed of services on the Grid. The focus of our work is on provenance collection for these workflows that are necessary to validate the workflow and to determine quality of generated data products. The challenge we address is to record uniform and usable provenance metadata that meets the domain needs while minimizing the modification burden on the service authors and the performance overhead on the workflow engine and the services. The framework is based on generating discrete provenance activities during the lifecycle of a workflow execution that can be aggregated to form complex data and process provenance graphs that can span across workflows. The implementation uses a loosely coupled publish-subscribe architecture for propagating these activities, and the capabilities of the system satisfy the needs of detailed provenance collection. A performance evaluation of a prototype finds a minimal performance overhead (in the range of 1% for an eight-service workflow using 271 data products).
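
The publish-subscribe propagation of discrete provenance activities can be sketched in a few lines of Python; this is an illustrative reading of the architecture described above, not the authors' implementation, and the activity fields are assumptions.

```python
from collections import defaultdict

# Illustrative sketch of the loosely coupled publish-subscribe idea:
# services publish small provenance "activities" during execution, and a
# subscriber aggregates them into a data provenance graph.

class ActivityBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, activity):
        for callback in self.subscribers:
            callback(activity)

provenance_graph = defaultdict(list)   # output product -> input products

def collect(activity):
    for out in activity["outputs"]:
        provenance_graph[out].extend(activity["inputs"])

bus = ActivityBus()
bus.subscribe(collect)

# Two services in a workflow emit activities as they run.
bus.publish({"service": "align", "inputs": ["raw.dat"], "outputs": ["aligned.dat"]})
bus.publish({"service": "plot", "inputs": ["aligned.dat"], "outputs": ["figure.png"]})

print(dict(provenance_graph))
```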

Journal ArticleDOI
Yingli Tian, Lisa M. Brown, Arun Hampapur, Max Lu, Andrew W. Senior, Chiao-fe Shu
30 Sep 2008
TL;DR: The IBM smart surveillance system (S3) is one of the few advanced surveillance systems which provides not only the capability to automatically monitor a scene but also the ability to manage the surveillance data, perform event-based retrieval, receive real-time event alerts through standard web infrastructure and extract long-term statistical patterns of activity.
Abstract: The increasing need for sophisticated surveillance systems and the move to a digital infrastructure has transformed surveillance into a large-scale data analysis and management challenge. Smart surveillance systems use automatic image understanding techniques to extract information from the surveillance data. While the majority of the research and commercial systems have focused on the information extraction aspect of the challenge, very few systems have explored the use of extracted information in the search, retrieval, data management and investigation context. The IBM smart surveillance system (S3) is one of the few advanced surveillance systems which provides not only the capability to automatically monitor a scene but also the capability to manage the surveillance data, perform event-based retrieval, receive real-time event alerts through standard web infrastructure and extract long-term statistical patterns of activity. The IBM S3 is easily customized to fit the requirements of different applications by using an open-standards based architecture for surveillance.

Journal ArticleDOI
01 Mar 2008
TL;DR: The biomedical informatics research network (BIRN) has developed a federated and distributed infrastructure for the storage, retrieval, analysis, and documentation of biomedical imaging data.
Abstract: The aggregation of imaging, clinical, and behavioral data from multiple independent institutions and researchers presents both a great opportunity for biomedical research and a formidable challenge. Many research groups have well-established data collection and analysis procedures, as well as data and metadata format requirements that are particular to that group. Moreover, the types of data and metadata collected are quite diverse, including image, physiological, and behavioral data, as well as descriptions of experimental design, and preprocessing and analysis methods. Each of these types of data utilizes a variety of software tools for collection, storage, and processing. Furthermore, sites are reluctant to release control over the distribution and access to the data and the tools. To address these needs, the biomedical informatics research network (BIRN) has developed a federated and distributed infrastructure for the storage, retrieval, analysis, and documentation of biomedical imaging data. The infrastructure consists of distributed data collections hosted on dedicated storage and computational resources located at each participating site, a federated data management system and data integration environment, an extensible markup language (XML) schema for data exchange, and analysis pipelines, designed to leverage both the distributed data management environment and the available grid computing resources.

Patent
27 Oct 2008
TL;DR: In this paper, the authors present a system and method for knowledge management in an organization, which employs an intranet site whereby members of the organization can easily and efficiently access explicit knowledge and tacit knowledge relevant to complete processes.
Abstract: A system and method for knowledge management in an organization. The system and method employs an intranet site whereby members of the organization can easily and efficiently access explicit and tacit knowledge relevant to completing processes in an organization. As members of an organization communicate and collaborate with each other using the present disclosure, new knowledge, ideas, or best practices may form as a result of the collaboration, which should then also be captured and codified as explicit knowledge.

Journal ArticleDOI
TL;DR: In this paper, the authors highlight many of the traditional research themes in the management of technology as well as research themes on emerging topics such as those that appear in this focused issue, and conclude by offering a list of research themes that are of particular interest to the Management of Technology Department of Production and Operations Management.
Abstract: We highlight many of the traditional research themes in the management of technology as well as research themes on emerging topics such as those that appear in this focused issue. The discussion demonstrates the breadth and multidisciplinary nature of management of technology as well as the variety of methods employed in management of technology research. We conclude by offering a list of research themes that are of particular interest to the Management of Technology Department of Production and Operations Management.

Book
05 Jun 2008
TL;DR: This book systematically introduces MDM key concepts and technical themes, explains its business case, and illuminates how it interrelates with and enables SOA.
Abstract: The Only Complete Technical Primer for MDM Planners, Architects, and Implementers. Companies moving toward flexible SOA architectures often face difficult information management and integration challenges. The master data they rely on is often stored and managed in ways that are redundant, inconsistent, inaccessible, non-standardized, and poorly governed. Using Master Data Management (MDM), organizations can regain control of their master data, improve corresponding business processes, and maximize its value in SOA environments. Enterprise Master Data Management provides an authoritative, vendor-independent MDM technical reference for practitioners: architects, technical analysts, consultants, solution designers, and senior IT decision-makers. Written by the IBM data management innovators who are pioneering MDM, this book systematically introduces MDM's key concepts and technical themes, explains its business case, and illuminates how it interrelates with and enables SOA. Drawing on their experience with cutting-edge projects, the authors introduce MDM patterns, blueprints, solutions, and best practices published nowhere else -- everything you need to establish a consistent, manageable set of master data, and use it for competitive advantage. Coverage includes:
* How MDM and SOA complement each other
* Using the MDM Reference Architecture to position and design MDM solutions within an enterprise
* Assessing the value and risks to master data and applying the right security controls
* Using PIM-MDM and CDI-MDM Solution Blueprints to address industry-specific information management challenges
* Explaining MDM patterns as enablers to accelerate consistent MDM deployments
* Incorporating MDM solutions into existing IT landscapes via MDM Integration Blueprints
* Leveraging master data as an enterprise asset -- bringing people, processes, and technology together with MDM and data governance
* Best practices in MDM deployment, including data warehouse and SAP integration

Journal ArticleDOI
01 Aug 2008
TL;DR: Clustera is designed for extensibility, enabling the system to be easily extended to handle a wide variety of job types ranging from computationally-intensive, long-running jobs with minimal I/O requirements to complex SQL queries over massive relational tables.
Abstract: This paper introduces Clustera, an integrated computation and data management system. In contrast to traditional cluster-management systems that target specific types of workloads, Clustera is designed for extensibility, enabling the system to be easily extended to handle a wide variety of job types ranging from computationally-intensive, long-running jobs with minimal I/O requirements to complex SQL queries over massive relational tables. Another unique feature of Clustera is the way in which the system architecture exploits modern software building blocks including application servers and relational database systems in order to realize important performance, scalability, portability and usability benefits. Finally, experimental evaluation suggests that Clustera has good scale-up properties for SQL processing, that Clustera delivers performance comparable to Hadoop for MapReduce processing and that Clustera can support higher job throughput rates than previously published results for the Condor and CondorJ2 batch computing systems.
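
As a rough picture of what such extensibility can look like, the Python sketch below models jobs behind a single abstract interface so that new workload types plug in uniformly; the class names are illustrative assumptions and do not reflect Clustera's actual API.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of an extensible job abstraction: the cluster
# manager schedules abstract jobs, and new workload types plug in by
# implementing one interface. Names are invented, not Clustera's API.

class Job(ABC):
    @abstractmethod
    def run(self):
        ...

class BatchJob(Job):
    def __init__(self, command):
        self.command = command
    def run(self):
        return f"exec {self.command}"

class SqlJob(Job):
    def __init__(self, query):
        self.query = query
    def run(self):
        return f"sql {self.query}"

def schedule(jobs):
    # A real system would also pick nodes, co-locate jobs with their
    # data, and manage pipelining between dependent jobs.
    return [job.run() for job in jobs]

print(schedule([BatchJob("simulate --steps 1000"),
                SqlJob("SELECT COUNT(*) FROM events")]))
```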

Posted Content
TL;DR: An overview of Knowledge Management and various aspects of secure knowledge management is presented and a case study of knowledge management activities at Tata Steel is discussed.
Abstract: Knowledge has lately been recognized as one of the most important assets of organizations. Managing knowledge has grown to be imperative for the success of a company. This paper presents an overview of Knowledge Management and various aspects of secure knowledge management. A case study of knowledge management activities at Tata Steel is also discussed.

Patent
11 Apr 2008
TL;DR: In this paper, a system and method for managing a plurality of clients is presented, where a request to implement a change in configuration data is received from a user and the configuration data relates to an operation of a client.
Abstract: System and method for managing a plurality of clients. A request to implement a change in configuration data is received from a user. The configuration data relates to an operation of a client. The received request is stored in a memory area. Computer-executable instructions request topology data from the memory area based on the configuration data to identify the client. The requested topology data is received from the memory area. Computer-executable instructions identify a notification service associated with the client and notify the identified notification service of the change in the configuration data.
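
A hypothetical Python sketch of this claimed flow, with every name invented for illustration, might look like the following: a change request is stored, topology data identifies the affected client, and that client's notification service is told about the change.

```python
# Hypothetical sketch of the patent's claimed flow. All names here are
# illustrative, not from the patent.

pending_requests = []                                  # the "memory area"
topology = {"client-42": {"notifier": "notifier-a"}}   # topology data
notification_services = {"notifier-a": []}             # queued notices

def submit_change(client_id, key, value):
    request = {"client": client_id, "key": key, "value": value}
    pending_requests.append(request)                   # store the request
    return request

def process(request):
    node = topology[request["client"]]                 # identify the client
    service = node["notifier"]                         # its notification service
    notification_services[service].append(
        f"{request['client']}: {request['key']} -> {request['value']}")

process(submit_change("client-42", "log_level", "debug"))
print(notification_services)
```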

Journal ArticleDOI
TL;DR: A general design template is proposed as a framework to help researchers and practitioners coordinate and federate their efforts to improve the general knowledge of the links between habitat dynamics and biological aquatic responses.
Abstract: The conclusions of numerous stream restoration assessments all around the world are extremely clear and convergent: there has been insufficient appropriate monitoring to improve general knowledge and expertise. In the specialized field of instream flow alterations, we consider that there are several opportunities comparable to full-size experiments. Hundreds of water management decisions related to instream flow releases have been made by government agencies, native peoples, and non-governmental organizations around the world. These decisions are based on different methods and assumptions, and many flow regimes have been adopted by formal or informal rules and regulations. Although there have been significant advances in analytical capabilities, there has been very little validation monitoring of actual outcomes or research related to the response of aquatic dependent species to new flow regimes. In order to be able to detect these kinds of responses and to better guide decisions, a general design template is proposed. The main steps of this template are described and discussed, in terms of objectives, hypotheses, variables, time scale, data management, and information, in the spirit of adaptive management. The adoption of such a framework is not always easy, due to the differing interests of actors in the results, the duration of monitoring, the nature of funding, and differential timetables between facilities managers and technicians. Nevertheless, implementation of such a framework could help researchers and practitioners to coordinate and federate their efforts to improve the general knowledge of the links between habitat dynamics and biological aquatic responses.

Journal ArticleDOI
TL;DR: The authors assess the contribution of management of knowledge across organizational and professional boundaries towards improved public services, and empirically investigate the potential for knowledge sharing within the context of the NHS modernization agenda, taking as their focus the current patient safety policy agenda.
Abstract: This article assesses the contribution of management of knowledge across organizational and professional boundaries towards improved public services. We empirically investigate the potential for knowledge sharing within the context of the NHS modernization agenda, taking as our focus the current ‘patient safety’ policy agenda. Specifically, we evaluate the introduction of a knowledge management system, namely the National Reporting and Learning System (NRSL) and its impact in the area of operating theatres within a university teaching hospital. We suggest that government policy in this area needs to reflect more upon limits to the management of knowledge and issues of the nature of knowledge, professional cultures and institutional power and politics.

Journal ArticleDOI
TL;DR: This article describes an implementation of the semantic provenance framework for glycoproteomics, which comprises expressive provenance information and domain-specific provenance ontologies and applies this information to data management.
Abstract: Provenance information in eScience is metadata that's critical to effectively manage the exponentially increasing volumes of scientific data from industrial-scale experiment protocols. Semantic provenance, based on domain-specific provenance ontologies, lets software applications unambiguously interpret data in the correct context. The semantic provenance framework for eScience data comprises expressive provenance information and domain-specific provenance ontologies and applies this information to data management. The authors' "two degrees of separation" approach advocates the creation of high-quality provenance information using specialized services. In contrast to workflow engines generating provenance information as a core functionality, the specialized provenance services are integrated into a scientific workflow on demand. This article describes an implementation of the semantic provenance framework for glycoproteomics.

Patent
05 May 2008
TL;DR: In this article, the authors provide a standard way to manage the south-side data of virtual functions, such as a MAC address for a virtual instance of an Ethernet device, assigned to LPARs sharing the adapter.
Abstract: A hypervisor, during device discovery, has code which can examine the south-side management data structure in an adapter's configuration space and determine the type of device which is being configured. The hypervisor may copy the south-side management data structure to a hardware management console (HMC) and the HMC can populate the data structure with south-side data and then pass the structure to the hypervisor to replace the data structure on the adapter. In another embodiment the hypervisor may copy the data structure to the HMC and the HMC can instruct the hypervisor to fill-in the data structure, a virtual function at a time, with south-side management data associations. The administrator can assign south-side data, such as a MAC address for a virtual instance of an Ethernet device, to LPARs sharing the adapter. Thus, a standard way to manage the south-side data of virtual functions is provided.

Journal ArticleDOI
01 Jul 2008
TL;DR: The current status of AliEn will be illustrated, as well as the performance of the system during the data challenges, and the future AliEn development roadmap is described.
Abstract: Starting from mid-2008, the ALICE detector at CERN LHC will collect data at a rate of 4 PB per year. ALICE will use exclusively distributed Grid resources to store, process and analyse this data. The top-level management of the Grid resources is done through the AliEn (ALICE Environment) system, which has been in continuous development since the year 2000. AliEn presents several original solutions, which have shown their viability in a number of large exercises of increasing complexity called Data Challenges. This paper describes the AliEn architecture: Job Management, Data Management and UI. The current status of AliEn is illustrated, as well as the performance of the system during the data challenges. The paper also describes the future AliEn development roadmap.

Journal ArticleDOI
TL;DR: It is found that the existing Radio Frequency IDentification (RFID) data management scheme has to be modified so as to provide end-to-end traceability.
Abstract: Purpose – The paper aims to propose a novel dynamic tracing task model to enhance the traceability range along the supply chain beyond simple distribution channels. It further extends the study by implementing the system architecture with the proposed data model to support the dynamic tracing task. Design/methodology/approach – Typical processes of supply chain in manufacturing industries are followed, using bill of material data to extract and define information requirements. The data elements are systematically selected and explained in the proposed model step by step. Findings – The paper found that the existing Radio Frequency IDentification (RFID) data management scheme has to be modified so as to provide end-to-end traceability. Research limitations/implications – Validation of the proposed model and system architecture should be done through actual implementation in industrial settings. Practical implications – The paper gives insight to many system managers and executors in how full traceability along ...
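
The end-to-end tracing idea, recovering a product's full component history by following bill-of-material links, can be illustrated with a toy Python sketch; the data model below is an assumption for illustration, not the paper's.

```python
# Illustrative toy (not the paper's actual data model): bill-of-material
# links connect finished items to their components, and recursive
# traversal recovers every tagged item in a product's history.
bom = {
    "bike-001": ["frame-007", "wheel-031", "wheel-032"],
    "wheel-031": ["rim-210", "tire-550"],
}

def trace(item):
    """Return the item plus all components reachable through BOM links."""
    found = [item]
    for component in bom.get(item, []):
        found.extend(trace(component))
    return found

print(trace("bike-001"))
```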

Journal ArticleDOI
01 Jul 2008
TL;DR: Improvements and enhancements to Don Quijote2 based on the increasing demands for ATLAS data management are presented, showing that DQ2 is capable of handling data up to and beyond the requirements of full-scale data-taking.
Abstract: The ATLAS detector at CERN's Large Hadron Collider presents data handling requirements on an unprecedented scale. From 2008 on, the ATLAS distributed data management system, Don Quijote2 (DQ2), must manage tens of petabytes of experiment data per year, distributed globally via the LCG, OSG and NDGF computing grids, now commonly known as the WLCG. Since its inception in 2005, DQ2 has continuously managed all experiment data for the ATLAS collaboration, which now comprises over 3000 scientists participating from more than 150 universities and laboratories in 34 countries. Fulfilling its primary requirement of providing a highly distributed, fault-tolerant and scalable architecture, DQ2 was successfully upgraded from managing data on a terabyte scale to managing data on a petabyte scale. We present improvements and enhancements to DQ2 based on the increasing demands for ATLAS data management. We describe performance issues, architectural changes and implementation decisions, the current state of deployment in test and production, as well as anticipated future improvements. Test results presented here show that DQ2 is capable of handling data up to and beyond the requirements of full-scale data-taking.