Institution
Philips
Company • Vantaa, Finland
About: Philips is a company based in Vantaa, Finland. It is known for its research contributions in the topics Signal & Layer (electronics). The organization has 68,260 authors who have published 99,663 publications receiving 1,882,329 citations. The organization is also known as Koninklijke Philips Electronics N.V. and Royal Philips Electronics.
Papers published on a yearly basis
Papers
25 Feb 2002
TL;DR: In this article, a process is presented for accessing data in a database, inputting the accessed data into a form, having a first user use the form with the accessed data, monitoring the database to detect changes made to the accessed data by a second user while the form is in use by the first user, and updating the form accordingly.
Abstract: A process of accessing data in a database (13), inputting the accessed data to a form, using the form with the accessed data by a first user, monitoring the database (13) to detect changes to the accessed data by a second user while the form is being used by the first user, updating the accessed data (20a-n) in the form while being used by the first user in accordance with rules corresponding to the detected changes, and displaying update status of the accessed data in accordance with the updating. The update status can indicate if the accessed data has not been changed since the first user began using the accessed data; if the first user has changed the accessed data; and if the second user has changed the accessed data while the first user is using the accessed data. The process can be implemented in a standalone processing device or in a network (10).
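The process described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patented implementation: the `Form` class, field names, and the refresh rule (second-user changes overwrite only fields the first user has not edited) are all assumptions made for the example.

```python
# Status values a field can carry, mirroring the three cases in the abstract.
UNCHANGED = "unchanged"
CHANGED_BY_FIRST = "changed-by-first-user"
CHANGED_BY_SECOND = "changed-by-second-user"

class Form:
    """Snapshot of database fields, with per-field update status."""

    def __init__(self, database, keys):
        self.database = database
        self.fields = {k: database[k] for k in keys}   # accessed data
        self.snapshot = dict(self.fields)              # values at load time
        self.status = {k: UNCHANGED for k in keys}

    def edit(self, key, value):
        """First user changes a field in the form."""
        self.fields[key] = value
        self.status[key] = CHANGED_BY_FIRST

    def poll(self):
        """Monitor the database for second-user changes; refresh only
        fields the first user has not edited (the example's rule)."""
        for key, snap in self.snapshot.items():
            current = self.database[key]
            if current != snap and self.status[key] != CHANGED_BY_FIRST:
                self.fields[key] = current
                self.snapshot[key] = current
                self.status[key] = CHANGED_BY_SECOND

db = {"name": "Philips", "city": "Vantaa"}
form = Form(db, ["name", "city"])
form.edit("city", "Eindhoven")        # first user edits a field
db["name"] = "Royal Philips"          # second user changes the database
form.poll()
print(form.status)
# {'name': 'changed-by-second-user', 'city': 'changed-by-first-user'}
```

The per-field status dictionary is what would drive the "displaying update status" step of the abstract, e.g. by coloring fields in a UI.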
211 citations
TL;DR: Both the opportunities and challenges posed to biomedical research by the increasing ability to tackle large datasets are discussed, including the need for standardization of data content, format, and clinical definitions.
Abstract: For over a decade the term "Big data" has been used to describe the rapid increase in volume, variety and velocity of information available, not just in medical research but in almost every aspect of our lives. As scientists, we now have the capacity to rapidly generate, store and analyse data that, only a few years ago, would have taken many years to compile. However, "Big data" no longer means what it once did. The term has expanded and now refers not to just large data volume, but to our increasing ability to analyse and interpret those data. Tautologies such as "data analytics" and "data science" have emerged to describe approaches to the volume of available information as it grows ever larger. New methods dedicated to improving data collection, storage, cleaning, processing and interpretation continue to be developed, although not always by, or for, medical researchers. Exploiting new tools to extract meaning from large volume information has the potential to drive real change in clinical practice, from personalized therapy and intelligent drug design to population screening and electronic health record mining. As ever, where new technology promises "Big Advances," significant challenges remain. Here we discuss both the opportunities and challenges posed to biomedical research by our increasing ability to tackle large datasets. Important challenges include the need for standardization of data content, format, and clinical definitions, a heightened need for collaborative networks with sharing of both data and expertise and, perhaps most importantly, a need to reconsider how and when analytic methodology is taught to medical researchers. We also set "Big data" analytics in context: recent advances may appear to promise a revolution, sweeping away conventional approaches to medical science. However, their real promise lies in their synergy with, not replacement of, classical hypothesis-driven methods. 
The generation of novel, data-driven hypotheses based on interpretable models will always require stringent validation and experimental testing. Thus, hypothesis-generating research founded on large datasets adds to, rather than replaces, traditional hypothesis driven science. Each can benefit from the other and it is through using both that we can improve clinical practice.
211 citations
Affiliations: University of Lyon, Institute for Systems Biology, University of Luxembourg, Wellcome Trust Sanger Institute, Semmelweis University, Linköping University, Pfizer, International Association of Classification Societies, Max Planck Society, Austrian Academy of Sciences, Medical University of Vienna, University of Florida, KPMG, Leiden University, CERN, Utrecht University, European Bioinformatics Institute, Saarland University, Forschungszentrum Jülich, Leiden University Medical Center, Imperial College London, Vienna University of Technology, Association of the British Pharmaceutical Industry, RWTH Aachen University, University of Sheffield, University of Leeds, King's College London, National and Kapodistrian University of Athens, Fraunhofer Society, Janssen Pharmaceutica, University of Manchester, Curie Institute, Philips, Katholieke Universiteit Leuven, University of Valencia, Swiss Institute of Bioinformatics, Polaris Industries
TL;DR: Clinicians, researchers, and citizens need improved methods, tools, and training to generate, analyze, and query data effectively and contribute to creating the European Single Market for health, which will improve health and healthcare for all Europeans.
Abstract: Medicine and healthcare are undergoing profound changes. Whole-genome sequencing and high-resolution imaging technologies are key drivers of this rapid and crucial transformation. Technological innovation combined with automation and miniaturization has triggered an explosion in data production that will soon reach exabyte proportions. How are we going to deal with this exponential increase in data production? The potential of “big data” for improving health is enormous but, at the same time, we face a wide range of challenges to overcome urgently. Europe is very proud of its cultural diversity; however, exploitation of the data made available through advances in genomic medicine, imaging, and a wide range of mobile health applications or connected devices is hampered by numerous historical, technical, legal, and political barriers. European health systems and databases are diverse and fragmented. There is a lack of harmonization of data formats, processing, analysis, and data transfer, which leads to incompatibilities and lost opportunities. Legal frameworks for data sharing are evolving. Clinicians, researchers, and citizens need improved methods, tools, and training to generate, analyze, and query data effectively. Addressing these barriers will contribute to creating the European Single Market for health, which will improve health and healthcare for all Europeans.
211 citations
01 Dec 2016
TL;DR: The authors proposed a stacked residual LSTM network for paraphrase generation, which adds residual connections between LSTM layers for efficient training, and achieved state-of-the-art performance on three different datasets: PPDB, WikiAnswers, and MSCOCO.
Abstract: In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers, and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based, and bi-directional LSTM models on BLEU, METEOR, TER, and an embedding-based sentence similarity metric.
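The core idea of the paper, residual (skip) connections between stacked recurrent layers, can be sketched without a deep-learning framework. In this hedged sketch, plain tanh-RNN cells stand in for LSTM cells so the residual wiring stays visible; the sizes, weights, and function names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # illustrative hidden size

class RNNCell:
    """Minimal tanh recurrent cell standing in for an LSTM cell."""

    def __init__(self):
        self.Wx = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
        self.Wh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))

    def step(self, x, h):
        return np.tanh(x @ self.Wx + h @ self.Wh)

def stacked_residual_forward(layers, inputs):
    """Run a sequence through stacked cells; each layer's output is
    cell(x) + x, i.e. a residual connection from its own input."""
    states = [np.zeros(HIDDEN) for _ in layers]
    x = None
    for x_t in inputs:
        x = x_t
        for l, cell in enumerate(layers):
            out = cell.step(x, states[l])
            states[l] = out
            x = out + x          # residual connection between layers
    return x                     # top-of-stack output at the last timestep

layers = [RNNCell() for _ in range(3)]        # a 3-layer stack
sequence = [rng.normal(size=HIDDEN) for _ in range(5)]
y = stacked_residual_forward(layers, sequence)
print(y.shape)   # (8,)
```

The residual addition `out + x` is what lets gradients flow directly past each layer during training, which is why the deeper stacks in the paper remain trainable.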
211 citations
TL;DR: This work describes a more general approach that allows causal models to be applied to any lifecycle and enables decision-makers to reason in a way that is not possible with regression-based models.
Abstract: An important decision in software projects is when to stop testing. Decision support tools for this have been built using causal models represented by Bayesian Networks (BNs), incorporating empirical data and expert judgement. Previously, this required a custom BN for each development lifecycle. We describe a more general approach that allows causal models to be applied to any lifecycle. The approach evolved through collaborative projects and captures significant commercial input. For projects within the range of the models, defect predictions are very accurate. This approach enables decision-makers to reason in a way that is not possible with regression-based models.
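To make the Bayesian-network idea concrete, here is a deliberately tiny sketch of the style of reasoning involved: a two-node causal model relating residual defect level to observed test failures, queried by exact enumeration. The node names and all probabilities are hypothetical, chosen only to illustrate the mechanics, not taken from the paper's models.

```python
# Prior belief over residual defects in the build (illustrative numbers).
p_defects = {"low": 0.7, "high": 0.3}

# P(a test run shows failures | defect level) -- also illustrative.
p_fail_given = {"low": 0.1, "high": 0.8}

def posterior_defects(observed_failure):
    """Exact inference by enumeration: P(defect level | evidence)."""
    joint = {}
    for level, prior in p_defects.items():
        like = p_fail_given[level] if observed_failure else 1 - p_fail_given[level]
        joint[level] = prior * like
    z = sum(joint.values())                       # normalizing constant
    return {level: p / z for level, p in joint.items()}

# A clean test run shifts belief strongly toward "low defects",
# evidence in favor of a stop-testing decision.
post = posterior_defects(observed_failure=False)
print(post)
# {'low': 0.913..., 'high': 0.086...}
```

Real stop-testing models of the kind the paper describes have many more nodes (test effort, process quality, defect discovery rate) and combine expert-elicited and empirical probabilities, but the query pattern is the same: condition on observations, read off the posterior, and act on it.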
211 citations
Authors
Showing all 68268 results
Name | H-index | Papers | Citations |
---|---|---|---|
Mark Raymond Adams | 147 | 1187 | 135038 |
Dario R. Alessi | 136 | 354 | 74753 |
Mohammad Khaja Nazeeruddin | 129 | 646 | 85630 |
Sanjay Kumar | 120 | 2052 | 82620 |
Mark W. Dewhirst | 116 | 797 | 57525 |
Carl G. Figdor | 116 | 566 | 52145 |
Mathias Fink | 116 | 900 | 51759 |
David B. Solit | 114 | 469 | 52340 |
Giulio Tononi | 114 | 511 | 58519 |
Jie Wu | 112 | 1537 | 56708 |
Claire M. Fraser | 108 | 352 | 76292 |
Michael F. Berger | 107 | 540 | 52426 |
Nikolaus Schultz | 106 | 297 | 120240 |
Rolf Müller | 104 | 905 | 50027 |
Warren J. Manning | 102 | 606 | 38781 |