scispace - formally typeset
Institution

Philips

Company
Vantaa, Finland

About: Philips is a company organization based in Vantaa, Finland. It is known for research contributions in the topics Signal & Layer (electronics). The organization has 68260 authors who have published 99663 publications receiving 1882329 citations. The organization is also known as: Koninklijke Philips Electronics N.V. & Royal Philips Electronics.


Papers
Patent
25 Feb 2002
TL;DR: In this article, a process is described for accessing data in a database, inputting the accessed data to a form, using the form with the accessed data by a first user, and monitoring the database to detect changes to the accessed data made by a second user while the form is being used by the first user.
Abstract: A process of accessing data in a database (13), inputting the accessed data to a form, using the form with the accessed data by a first user, monitoring the database (13) to detect changes to the accessed data by a second user while the form is being used by the first user, updating the accessed data (20a-n) in the form while being used by the first user in accordance with rules corresponding to the detected changes, and displaying update status of the accessed data in accordance with the updating. The update status can indicate if the accessed data has not been changed since the first user began using the accessed data; if the first user has changed the accessed data; and if the second user has changed the accessed data while the first user is using the accessed data. The process can be implemented in a standalone processing device or in a network (10).
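The process above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation; all names (`Form`, `poll`, the status constants) and the specific update rule (silently refresh fields the first user has not touched, flag conflicts on fields they have) are assumptions for the sake of the example.

```python
# Update status values for each form field
UNCHANGED = "unchanged"                # untouched since the first user loaded it
CHANGED_BY_ME = "changed_by_me"        # edited by the first user
CHANGED_BY_OTHER = "changed_by_other"  # changed in the database by a second user

class Form:
    def __init__(self, db, keys):
        self.db = db
        self.snapshot = {k: db[k] for k in keys}  # data as originally accessed
        self.values = dict(self.snapshot)         # data shown in the form
        self.status = {k: UNCHANGED for k in keys}

    def edit(self, key, value):
        """The first user edits a field in the form."""
        self.values[key] = value
        self.status[key] = CHANGED_BY_ME

    def poll(self):
        """Monitor the database; apply a simple rule set: refresh fields the
        first user has not touched, and flag any field changed underneath them."""
        for key, loaded in self.snapshot.items():
            current = self.db[key]
            if current != loaded:
                if self.status[key] == UNCHANGED:
                    self.values[key] = current  # silently refresh untouched field
                self.status[key] = CHANGED_BY_OTHER
                self.snapshot[key] = current

db = {"name": "Alice", "city": "Vantaa"}
form = Form(db, ["name", "city"])
form.edit("name", "Alicia")   # first user edits the form
db["city"] = "Helsinki"       # second user changes the database meanwhile
form.poll()                   # untouched "city" is refreshed and flagged
```

In this sketch the monitoring is a manual `poll()`; a real system would run it on a timer or via database change notifications.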

211 citations

Journal ArticleDOI
TL;DR: Both the opportunities and challenges posed to biomedical research by the increasing ability to tackle large datasets are discussed, including the need for standardization of data content, format, and clinical definitions.
Abstract: For over a decade the term "Big data" has been used to describe the rapid increase in volume, variety and velocity of information available, not just in medical research but in almost every aspect of our lives. As scientists, we now have the capacity to rapidly generate, store and analyse data that, only a few years ago, would have taken many years to compile. However, "Big data" no longer means what it once did. The term has expanded and now refers not to just large data volume, but to our increasing ability to analyse and interpret those data. Tautologies such as "data analytics" and "data science" have emerged to describe approaches to the volume of available information as it grows ever larger. New methods dedicated to improving data collection, storage, cleaning, processing and interpretation continue to be developed, although not always by, or for, medical researchers. Exploiting new tools to extract meaning from large volume information has the potential to drive real change in clinical practice, from personalized therapy and intelligent drug design to population screening and electronic health record mining. As ever, where new technology promises "Big Advances," significant challenges remain. Here we discuss both the opportunities and challenges posed to biomedical research by our increasing ability to tackle large datasets. Important challenges include the need for standardization of data content, format, and clinical definitions, a heightened need for collaborative networks with sharing of both data and expertise and, perhaps most importantly, a need to reconsider how and when analytic methodology is taught to medical researchers. We also set "Big data" analytics in context: recent advances may appear to promise a revolution, sweeping away conventional approaches to medical science. However, their real promise lies in their synergy with, not replacement of, classical hypothesis-driven methods. 
The generation of novel, data-driven hypotheses based on interpretable models will always require stringent validation and experimental testing. Thus, hypothesis-generating research founded on large datasets adds to, rather than replaces, traditional hypothesis driven science. Each can benefit from the other and it is through using both that we can improve clinical practice.

211 citations

Journal ArticleDOI
TL;DR: Clinicians, researchers, and citizens need improved methods, tools, and training to generate, analyze, and query data effectively; meeting these needs will contribute to creating the European Single Market for health, which will improve health and healthcare for all Europeans.
Abstract: Medicine and healthcare are undergoing profound changes. Whole-genome sequencing and high-resolution imaging technologies are key drivers of this rapid and crucial transformation. Technological innovation combined with automation and miniaturization has triggered an explosion in data production that will soon reach exabyte proportions. How are we going to deal with this exponential increase in data production? The potential of “big data” for improving health is enormous but, at the same time, we face a wide range of challenges to overcome urgently. Europe is very proud of its cultural diversity; however, exploitation of the data made available through advances in genomic medicine, imaging, and a wide range of mobile health applications or connected devices is hampered by numerous historical, technical, legal, and political barriers. European health systems and databases are diverse and fragmented. There is a lack of harmonization of data formats, processing, analysis, and data transfer, which leads to incompatibilities and lost opportunities. Legal frameworks for data sharing are evolving. Clinicians, researchers, and citizens need improved methods, tools, and training to generate, analyze, and query data effectively. Addressing these barriers will contribute to creating the European Single Market for health, which will improve health and healthcare for all Europeans.

211 citations

Proceedings Article
01 Dec 2016
TL;DR: The authors proposed a stacked residual LSTM network for paraphrase generation, which adds residual connections between LSTMs layers for efficient training, and achieved state-of-the-art performance on three different datasets: PPDB, WikiAnswers and MSCOCO.
Abstract: In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers, and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based, and bi-directional LSTM models on BLEU, METEOR, TER, and an embedding-based sentence similarity metric.
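The core idea in this abstract, residual connections between stacked LSTM layers, can be illustrated with a small NumPy sketch. The single-matrix gate packing, weight shapes, and initialization here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM time step; W packs the input, forget, output, and cell gates."""
    z = W @ np.concatenate([x, h])
    d = h.shape[0]
    i = sigmoid(z[:d])            # input gate
    f = sigmoid(z[d:2 * d])       # forget gate
    o = sigmoid(z[2 * d:3 * d])   # output gate
    g = np.tanh(z[3 * d:])        # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def stacked_residual_lstm(xs, layer_weights, d):
    """Run a sequence through stacked LSTM layers, adding each layer's
    input back to its output (the residual connection), which is what
    allows deep stacks to train efficiently."""
    seq = list(xs)
    for W in layer_weights:
        h, c = np.zeros(d), np.zeros(d)
        out = []
        for x in seq:
            h, c = lstm_step(x, h, c, W)
            out.append(h + x)  # residual: layer input added to LSTM output
        seq = out
    return seq

rng = np.random.default_rng(0)
d, depth, T = 4, 3, 5  # hidden size, number of layers, sequence length
weights = [rng.standard_normal((4 * d, 2 * d)) * 0.1 for _ in range(depth)]
inputs = [rng.standard_normal(d) for _ in range(T)]
outputs = stacked_residual_lstm(inputs, weights, d)
```

Note the residual addition requires the layer input and output to share dimension `d`; the paper's encoder-decoder setup and training details are omitted here.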

211 citations

Journal ArticleDOI
TL;DR: This work describes a more general approach that allows causal models to be applied to any lifecycle and enables decision-makers to reason in a way that is not possible with regression-based models.
Abstract: An important decision in software projects is when to stop testing. Decision support tools for this have been built using causal models represented by Bayesian Networks (BNs), incorporating empirical data and expert judgement. Previously, this required a custom BN for each development lifecycle. We describe a more general approach that allows causal models to be applied to any lifecycle. The approach evolved through collaborative projects and captures significant commercial input. For projects within the range of the models, defect predictions are very accurate. This approach enables decision-makers to reason in a way that is not possible with regression-based models.
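The flavor of causal-model reasoning described here can be shown with a deliberately tiny, hand-built example: residual defects depend on code complexity and testing effort, and marginalizing over complexity supports the "when to stop testing" decision. All probabilities and variable names are invented for illustration; the actual models are far richer Bayesian Networks built from empirical data and expert judgement.

```python
# Prior over code complexity (hypothetical numbers)
P_complex = {"high": 0.3, "low": 0.7}

# P(residual defects = "many" | complexity, testing effort), hypothetical
P_many = {
    ("high", "high"): 0.3, ("high", "low"): 0.8,
    ("low", "high"): 0.05, ("low", "low"): 0.3,
}

def p_many_defects(effort):
    """Marginalize over complexity for a chosen testing effort:
    sum_c P(complexity=c) * P(many defects | c, effort)."""
    return sum(P_complex[c] * P_many[(c, effort)] for c in P_complex)

# Stopping now (low further effort) vs. continuing testing (high effort):
risk_stop = p_many_defects("low")    # 0.7*0.3 + 0.3*0.8  = 0.45
risk_more = p_many_defects("high")   # 0.7*0.05 + 0.3*0.3 = 0.125
```

A decision-maker would weigh this drop in defect risk against the cost of further testing, which is the kind of trade-off a regression-based model cannot express causally.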

211 citations


Authors

Showing all 68268 results

Name | H-index | Papers | Citations
Mark Raymond Adams | 147 | 1187 | 135038
Dario R. Alessi | 136 | 354 | 74753
Mohammad Khaja Nazeeruddin | 129 | 646 | 85630
Sanjay Kumar | 120 | 2052 | 82620
Mark W. Dewhirst | 116 | 797 | 57525
Carl G. Figdor | 116 | 566 | 52145
Mathias Fink | 116 | 900 | 51759
David B. Solit | 114 | 469 | 52340
Giulio Tononi | 114 | 511 | 58519
Jie Wu | 112 | 1537 | 56708
Claire M. Fraser | 108 | 352 | 76292
Michael F. Berger | 107 | 540 | 52426
Nikolaus Schultz | 106 | 297 | 120240
Rolf Müller | 104 | 905 | 50027
Warren J. Manning | 102 | 606 | 38781
Network Information
Related Institutions (5)
Katholieke Universiteit Leuven
176.5K papers, 6.2M citations

91% related

Georgia Institute of Technology
119K papers, 4.6M citations

88% related

Stanford University
320.3K papers, 21.8M citations

88% related

National University of Singapore
165.4K papers, 5.4M citations

88% related

IBM
253.9K papers, 7.4M citations

88% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 5
2022 | 39
2021 | 898
2020 | 1,428
2019 | 1,665
2018 | 1,378