
Showing papers by "James Bailey" published in 2022


Journal Article
TL;DR: This work provides a novel algorithm for learning optimal classification trees based on dynamic programming and search, and shows in a detailed experimental study that this approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
Abstract: Decision tree learning is a widely used approach in machine learning, favoured in applications that require concise and interpretable models. Heuristic methods are traditionally used to quickly produce models with reasonably high accuracy. A commonly criticised point, however, is that the resulting trees may not necessarily be the best representation of the data in terms of accuracy and size. In recent years, this motivated the development of optimal classification tree algorithms that globally optimise the decision tree in contrast to heuristic methods that perform a sequence of locally optimal decisions. We follow this line of work and provide a novel algorithm for learning optimal classification trees based on dynamic programming and search. Our algorithm supports constraints on the depth of the tree and number of nodes. The success of our approach is attributed to a series of specialised techniques that exploit properties unique to classification trees. Whereas algorithms for optimal classification trees have traditionally been plagued by high runtimes and limited scalability, we show in a detailed experimental study that our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances, providing several orders of magnitude improvements and notably contributing towards the practical use of optimal decision trees.

15 citations
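The core idea, dynamic programming with search over depth-bounded subtrees, can be illustrated compactly. The sketch below is a minimal brute-force illustration of that idea on binary features, not the authors' algorithm, which adds the specialised bounds, caching, and node-count constraints that make it scale.

```python
# Minimal sketch of dynamic programming for a depth-bounded optimal
# classification tree on binary features and 0/1 labels. Illustrative only;
# the paper's algorithm uses specialised pruning and supports node limits.
from functools import lru_cache

def misclassifications(labels):
    # Cost of turning this subset into a leaf: majority-class error.
    return len(labels) - max(labels.count(0), labels.count(1))

def optimal_tree_cost(data, depth):
    """data: tuple of (features, label) pairs; features is a tuple of 0/1."""
    @lru_cache(maxsize=None)
    def solve(rows, d):
        labels = [data[i][1] for i in rows]
        best = misclassifications(labels)          # option 1: make a leaf
        if d == 0 or best == 0:
            return best
        for f in range(len(data[0][0])):            # option 2: split on feature f
            left = tuple(i for i in rows if data[i][0][f] == 0)
            right = tuple(i for i in rows if data[i][0][f] == 1)
            if left and right:
                best = min(best, solve(left, d - 1) + solve(right, d - 1))
        return best
    return solve(tuple(range(len(data))), depth)

data = (((0, 1), 1), ((1, 1), 0), ((0, 0), 0), ((1, 0), 1))  # tiny XOR-like set
print(optimal_tree_cost(data, depth=2))  # 0: a depth-2 tree fits it exactly
```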


Proceedings ArticleDOI
25 Jul 2022
TL;DR: This work proposes a novel Interaction Graph Transformer network for skeleton-based interaction recognition that models the interactive body parts as graphs, outperforming the state-of-the-art by a significant margin.
Abstract: Human interaction recognition is very important in many applications. One crucial cue in recognizing an interaction is the interactive body parts. In this work, we propose a novel Interaction Graph Transformer (IGFormer) network for skeleton-based interaction recognition via modeling the interactive body parts as graphs. More specifically, the proposed IGFormer constructs interaction graphs according to the semantic and distance correlations between the interactive body parts, and enhances the representation of each person by aggregating the information of the interactive body parts based on the learned graphs. Furthermore, we propose a Semantic Partition Module to transform each human skeleton sequence into a Body-Part-Time sequence to better capture the spatial and temporal information of the skeleton sequence for learning the graphs. Extensive experiments on three benchmark datasets demonstrate that our model outperforms the state-of-the-art by a significant margin.

4 citations
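To make the notion of an interaction graph concrete, the sketch below builds a row-normalised graph from pairwise joint distances between two skeletons. IGFormer learns its graphs from both semantic and distance correlations; this toy version (the function name and temperature parameter are our own) only captures the distance part.

```python
# Hedged sketch: a distance-based interaction graph between two skeletons,
# as a softmax over negative pairwise joint distances. IGFormer learns its
# graphs; this only illustrates the distance-correlation idea.
import numpy as np

def interaction_graph(person_a, person_b, temperature=1.0):
    """person_a, person_b: (num_joints, 3) arrays of 3D joint coordinates."""
    diff = person_a[:, None, :] - person_b[None, :, :]    # (J, J, 3)
    dist = np.linalg.norm(diff, axis=-1)                  # pairwise distances
    logits = -dist / temperature                          # closer => stronger edge
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    return weights / weights.sum(axis=1, keepdims=True)   # row-normalised graph

a = np.random.rand(25, 3)   # e.g. 25 joints, as in NTU RGB+D skeletons
b = np.random.rand(25, 3)
G = interaction_graph(a, b)
print(G.shape, G.sum(axis=1)[:3])  # (25, 25); each row sums to 1
```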


Journal ArticleDOI
TL;DR: A novel divide-and-conquer algorithm based on transition points reasons over exact optimization problems and predicts the coefficients using the optimization loss; it outperforms existing exact frameworks and reasons over hard combinatorial problems better than surrogate methods.
Abstract: The predict+optimize problem combines machine learning and combinatorial optimization by predicting the problem coefficients first and then using these coefficients to solve the optimization problem. While this problem can be solved in two separate stages, recent research shows end-to-end models can achieve better results. This requires differentiating through a discrete combinatorial function. Models that use differentiable surrogates are prone to approximation errors, while existing exact models are limited to dynamic programming, or they do not generalize well with scarce data. In this work, we propose a novel divide-and-conquer algorithm based on transition points to reason over exact optimization problems and predict the coefficients using the optimization loss. Moreover, our model is not limited to dynamic programming problems. We also introduce a greedy version, which achieves similar results with less computation. In comparison with other predict+optimize frameworks, we show our method outperforms existing exact frameworks and can reason over hard combinatorial problems better than surrogate methods.

4 citations
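The "transition point" idea admits a simple illustration: as a predicted coefficient theta varies, the optimal solution of a discrete problem changes only at finitely many points, which divide-and-conquer can locate by comparing solutions at interval endpoints. The knapsack instance and bisection below are our own toy example, not the paper's method or loss.

```python
# Hedged sketch of transition points for predict+optimize. Illustrative only.
from itertools import combinations

def best_subset(values, weights, capacity):
    """Brute-force 0/1 knapsack over a handful of items (illustration only)."""
    best, best_val = (), 0.0
    for r in range(len(values) + 1):
        for sub in combinations(range(len(values)), r):
            if sum(weights[i] for i in sub) <= capacity:
                val = sum(values[i] for i in sub)
                if val > best_val:
                    best, best_val = sub, val
    return best

def transition_points(solve, lo, hi, tol=1e-3):
    """Approximate the thetas in [lo, hi] where solve(theta) changes."""
    if solve(lo) == solve(hi):        # heuristic: assume no change inside
        return []
    if hi - lo < tol:
        return [(lo + hi) / 2]
    mid = (lo + hi) / 2
    return (transition_points(solve, lo, mid, tol)
            + transition_points(solve, mid, hi, tol))

# Item 0 has the unknown predicted value theta; items 1 and 2 are fixed.
solve = lambda theta: best_subset([theta, 4.0, 5.0], [2, 3, 4], capacity=5)
print(transition_points(solve, 0.0, 10.0))  # ~[1.0]: the optimum switches there
```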


Journal ArticleDOI
TL;DR: In this article, the authors provide a taxonomy of existing image AutoDA approaches, discuss their pros and cons, and propose several potential directions for future improvements, identifying three key components of a standard AutoDA model: a search space, a search algorithm and an evaluation function.
Abstract: In recent years, one of the most popular techniques in the computer vision community has been the deep learning technique. As a data-driven technique, deep models require enormous amounts of accurately labelled training data, which is often inaccessible in many real-world applications. A data-space solution is Data Augmentation (DA), which can artificially generate new images out of original samples. Image augmentation strategies can vary by dataset, as different data types might require different augmentations to facilitate model training. However, the design of DA policies has been largely decided by human experts with domain knowledge, which is considered to be highly subjective and error-prone. To mitigate this problem, a novel direction is to automatically learn the image augmentation policies from the given dataset using Automated Data Augmentation (AutoDA) techniques. The goal of AutoDA models is to find the optimal DA policies that can maximize the model performance gains. This survey discusses the underlying reasons for the emergence of AutoDA technology from the perspective of image classification. We identify three key components of a standard AutoDA model: a search space, a search algorithm and an evaluation function. Based on their architecture, we provide a systematic taxonomy of existing image AutoDA approaches. This paper presents the major works in the AutoDA field, discussing their pros and cons, and proposing several potential directions for future improvements.

3 citations
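The three components the survey identifies fit together in a simple loop. The sketch below shows them with random search as the search algorithm; the operation names and the evaluate() stub are placeholders of our own, not from the paper.

```python
# Hedged sketch of the three AutoDA components: search space, search
# algorithm (random search, for brevity) and evaluation function.
import random

SEARCH_SPACE = {
    "op": ["rotate", "flip", "color_jitter", "crop"],  # candidate operations
    "magnitude": [0.1, 0.3, 0.5],
    "probability": [0.2, 0.5, 0.8],
}

def sample_policy(n_ops=2):
    # A policy is a short list of (operation, magnitude, probability) choices.
    return [{k: random.choice(v) for k, v in SEARCH_SPACE.items()}
            for _ in range(n_ops)]

def evaluate(policy):
    # Placeholder: in practice, train/validate a model with this
    # augmentation policy and return validation accuracy.
    return random.random()

def random_search(trials=20):
    best_policy, best_score = None, float("-inf")
    for _ in range(trials):
        policy = sample_policy()
        score = evaluate(policy)
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score

print(random_search())
```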


Proceedings ArticleDOI
29 Apr 2022
TL;DR: This paper proposes a machine learning-based pipeline that leverages and combines users’ natural gaze activity, the semantic knowledge from their vocal utterances and the synchronicity between gaze and speech data to facilitate users' interaction.
Abstract: Gaze and speech are rich contextual sources of information that, when combined, can result in effective and rich multimodal interactions. This paper proposes a machine learning-based pipeline that leverages and combines users’ natural gaze activity, the semantic knowledge from their vocal utterances and the synchronicity between gaze and speech data to facilitate users’ interaction. We evaluated our proposed approach on an existing dataset, which involved 32 participants recording voice notes while reading an academic paper. Using a Logistic Regression classifier, we demonstrate that our proposed multimodal approach maps voice notes to the relevant text passages with an average F1-score of 0.90. Our proposed pipeline motivates the design of multimodal interfaces that combine natural gaze and speech patterns to enable robust interactions.

2 citations
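The general shape of such a pipeline is easy to sketch: combine gaze, semantic and synchronicity features and train a logistic-regression classifier to decide whether a voice note belongs to a given passage. The feature names and data below are synthetic placeholders, not the study's.

```python
# Hedged sketch of a gaze+speech mapping pipeline with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
gaze_dwell = rng.random(n)      # e.g. normalised dwell time on the passage
semantic_sim = rng.random(n)    # e.g. note-passage text similarity
sync = rng.random(n)            # e.g. gaze-speech synchronicity score
X = np.column_stack([gaze_dwell, semantic_sim, sync])
y = ((gaze_dwell + semantic_sim + sync) / 3 > 0.5).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```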


Proceedings ArticleDOI
29 Apr 2022
TL;DR: The authors found that taking notes using voice leads to a higher conceptual understanding of the text than typing the notes, and that using voice triggers generative processes that result in learners taking more elaborate and comprehensive notes.
Abstract: Though recent technological advances have enabled note-taking through different modalities (e.g., keyboard, digital ink, voice), there is still a lack of understanding of the effect of the modality choice on learning. In this paper, we compared two note-taking input modalities—keyboard and voice—to study their effects on participants’ understanding of learning content. We conducted a study with 60 participants in which they were asked to take notes using voice or keyboard on two independent digital text passages while also making a judgment about their performance on an upcoming test. We built mixed-effects models to examine the effect of the note-taking modality on learners’ text comprehension, the content of notes and their meta-comprehension judgement. Our findings suggest that taking notes using voice leads to a higher conceptual understanding of the text when compared to typing the notes. We also found that using voice triggers generative processes that result in learners taking more elaborate and comprehensive notes. The findings of the study imply that note-taking tools designed for digital learning environments could incorporate voice as an input modality to promote effective note-taking and higher conceptual understanding of the text.

1 citation
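A mixed-effects model of the kind described fits a fixed effect of modality on comprehension with a random intercept per participant. The sketch below uses statsmodels with synthetic placeholder data and column names of our own, not the study's.

```python
# Hedged sketch of a mixed-effects analysis: fixed effect of note-taking
# modality on comprehension, random intercept per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
participants = np.repeat(np.arange(60), 2)       # 60 participants, 2 passages
modality = np.tile(["voice", "keyboard"], 60)
score = rng.normal(70, 8, 120) + (modality == "voice") * 4  # toy voice benefit

df = pd.DataFrame({"participant": participants,
                   "modality": modality,
                   "comprehension": score})
model = smf.mixedlm("comprehension ~ modality", df, groups=df["participant"])
print(model.fit().summary())
```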


Journal ArticleDOI
TL;DR: In this paper, photonic Doppler velocimetry (PDV) was used for time- and spatially resolved electron density measurements in a photoionized gas cell.
Abstract: Plasma density measurements are key to a wide variety of high-energy-density (HED) and laboratory astrophysics experiments. We present a creative application of photonic Doppler velocimetry (PDV) from which time- and spatially resolved electron density measurements can be made. PDV has been implemented for the first time in close proximity, ∼6 cm, to the high-intensity radiation flux produced by a z-pinch dynamic hohlraum on the Z-machine. Multiple PDV probes were incorporated into the photoionized gas cell platform. Two probes, spaced 4 mm apart, were used to assess plasma density and uniformity in the central region of the gas cell during the formation of the plasma. Electron density time histories with subnanosecond resolution were extracted from PDV measurements taken from gas cells fielded with neon at 15 Torr; a null shot with no gas fill in the cell was also fielded. A major achievement was the low-noise, high-quality measurements made in the harsh environment produced by the megajoules of x-ray energy emitted at the collapse of the z-pinch implosion. To evaluate time-dependent radiation-induced effects in the fiber optic system, two PDV noise probes were included on either side of the gas cell. The success of this alternative use of PDV demonstrates that it is a reliable, precise, and affordable new electron density diagnostic for radiation-driven experiments and, more generally, HED experiments.

1 citation
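The physics that makes an optical probe sensitive to electron density is standard plasma optics: the refractive index depends on the ratio of electron density to the critical density at the probe frequency, so a changing density along the probe path changes the optical path length, which a PDV system registers as an apparent velocity. The sketch below computes these textbook quantities; it is not the paper's specific inversion, and the 1550 nm wavelength is only a typical PDV assumption.

```python
# Hedged sketch: textbook plasma-optics relations behind density-sensitive
# optical probing. mu = sqrt(1 - n_e/n_c), n_c = eps0*m_e*w^2/e^2.
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E = 9.1093837015e-31    # electron mass, kg
E = 1.602176634e-19       # elementary charge, C
C = 2.99792458e8          # speed of light, m/s

def critical_density(wavelength_m):
    w = 2 * math.pi * C / wavelength_m      # probe angular frequency
    return EPS0 * M_E * w**2 / E**2         # critical density, m^-3

def refractive_index(n_e_m3, wavelength_m):
    return math.sqrt(1 - n_e_m3 / critical_density(wavelength_m))

n_e = 1e17 * 1e6           # 1e17 cm^-3 (the experiment's regime) in m^-3
lam = 1550e-9              # assumed: typical telecom-band PDV wavelength
print(critical_density(lam), refractive_index(n_e, lam))
```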


Journal ArticleDOI
01 May 2022-Cureus
TL;DR: Assessment of primary care personnel during coach integration into primary care teams through the Management of Diabetes in Everyday Life (MODEL) study revealed support for a team-based approach and recognition of the potential of coaches to enhance care.
Abstract: The purpose of this mixed-methods, cross-sectional study was to assess the acceptability, effectiveness, and credibility of lay health coaches from the perspective of primary care personnel during coach integration into primary care teams through the Management of Diabetes in Everyday Life (MODEL) study. Surveys of 46 primary care clinic personnel were conducted in June 2017 and July 2017 to assess the acceptability, effectiveness, and credibility of lay health coaches in the clinics. Clinic personnel rated coach acceptability, impact, and credibility on a five-point Likert scale at 3.78, 3.76–4.04, and 3.71–3.95, respectively. Additionally, interviews revealed support for a team-based approach and recognition of the potential of coaches to enhance care. In the interviews, clinic personnel also reported a lack of provider time to counsel patients as well as a need for improved provider-coach communication.

Journal ArticleDOI
TL;DR: In this article, He I Stark line-shape measurements at conditions relevant to DB atmospheres (T_e ≈ 12,000–17,000 K, n_e ≈ 10^17 cm^−3) were performed using X-rays from Sandia National Laboratories' Z-machine to create a uniform hydrogen-helium mixture plasma.
Abstract: Accurate helium White Dwarf (DB) masses are critical for understanding the star’s evolution. DB masses derived from the spectroscopic and photometric methods are inconsistent. Photometric masses agree better with currently accepted DB evolutionary theories and are mostly consistent across a large range of surface temperatures. Spectroscopic masses rely on untested He I Stark line-shape and Van der Waals broadening predictions, show unexpected surface temperature trends, and are thus viewed as less reliable. To test this conclusion, we present in this paper detailed He I Stark line-shape measurements at conditions relevant to DB atmospheres (T_e ≈ 12,000–17,000 K, n_e ≈ 10^17 cm^−3). We use X-rays from Sandia National Laboratories’ Z-machine to create a uniform, ≈120 mm long hydrogen–helium mixture plasma. Van der Waals broadening is negligible at our experimental conditions, allowing us to measure He I Stark profiles only. Hβ, which has been well-studied in our platform and elsewhere, serves as the n_e diagnostic. We find that He I Stark broadening models used in DB analyses are accurate within errors at tested conditions. It therefore seems unlikely that line-shape models are solely responsible for the observed spectroscopic mass trends. Our results should motivate the WD community to further scrutinize the validity of other spectroscopic and photometric input parameters, like atmospheric structure assumptions and convection corrections. These parameters can significantly change the derived DB mass. Identifying potential weaknesses in any input parameters could further our understanding of DBs, help elucidate their evolutionary origins, and strengthen confidence in both spectroscopic and photometric masses.

Proceedings ArticleDOI
21 Sep 2022
TL;DR: The Adaptive Local-Component-aware Graph Convolutional Network replaces the comparison metric with a focused sum of similarity measurements on aligned local embeddings of action-critical spatial/temporal segments, helping the model reach state-of-the-art.
Abstract: Skeleton-based action recognition receives increasing attention because skeleton sequences reduce training complexity by eliminating visual information irrelevant to actions. To further improve sample efficiency, meta-learning-based one-shot learning solutions were developed for skeleton-based action recognition. These methods predict by finding the nearest neighbors according to the similarity between instance-level global embedding. However, such measurement holds unstable representativity due to inadequate generalized learning on the averaged local invariant and noisy features, while intuitively, steady and fine-grained recognition relies on determining key local body movements. To address this limitation, we present the Adaptive Local-Component-aware Graph Convolutional Network, which replaces the comparison metric with a focused sum of similarity measurements on aligned local embedding of action-critical spatial/temporal segments. Comprehensive one-shot experiments on the public benchmark of NTURGB+D 120 indicate that our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
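The metric change at the heart of the method can be illustrated directly: instead of one similarity between pooled global embeddings, compute a weighted sum of per-part similarities over aligned local embeddings. The shapes, pooling choice and uniform weights below are our own illustration, not the paper's learned alignment or weighting.

```python
# Hedged sketch: global-embedding similarity vs a sum of similarities over
# aligned local (body-part/segment) embeddings for one-shot matching.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def global_similarity(x, y):
    # x, y: (num_parts, dim) local embeddings; compare pooled vectors.
    return cosine(x.mean(axis=0), y.mean(axis=0))

def local_component_similarity(x, y, weights=None):
    # Sum of per-part similarities; weights could focus on action-critical parts.
    parts = x.shape[0]
    w = np.ones(parts) / parts if weights is None else weights
    return float(sum(w[p] * cosine(x[p], y[p]) for p in range(parts)))

x = np.random.rand(5, 64)   # e.g. 5 body-part embeddings of a query sequence
y = np.random.rand(5, 64)   # support (one-shot exemplar) sequence
print(global_similarity(x, y), local_component_similarity(x, y))
```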


Journal ArticleDOI
01 Mar 2022-Cureus
TL;DR: The study demonstrates the feasibility of providing patient assistance with scheduling rapid primary care follow-up appointments at the time of discharge and the potential to improve care transitions and access to primary care among patients living in medically underserved areas.
Abstract: The management of diabetes, like many other chronic conditions, depends on effective primary care engagement. Patients with diabetes without a usual source of care have a higher risk of uncontrolled disease, hospitalizations, and early death. Our objective was to study the effect of a brief intervention to help patients in medically underserved areas obtain rapid primary care follow-up appointments following hospitalization. We performed a pilot pragmatic randomized controlled trial of adult patients with uncontrolled diabetes who had been admitted to one of three hospitals in the Memphis, TN, area. The enhanced usual care arm received a list of primary care clinics, whereas the intervention group had an appointment made for them preceding their index discharge. Patients in both groups were evaluated for primary care appointment attendance within seven and fourteen days of index discharge. In addition, we examined barriers patients encounter to receiving rapid primary care follow-up using a secret shopper approach to assess wait times when calling primary care offices. Twelve patients were enrolled with six in each trial arm. Baseline demographics, access to medical care, and health literacy were similar across the groups. Primary care follow-up was also similar across the groups; no improvements in follow-up rates were seen in the group receiving assistance with making appointments. Identified barriers to making primary care follow-up appointments included inability to schedule an urgent appointment, long hold times when calling doctor’s offices, and lack of transportation. Additionally, hold times when calling primary care offices were found to be excessively long in the medically underserved areas studied. The study demonstrates the feasibility of providing patient assistance with scheduling rapid primary care follow-up appointments at the time of discharge and the potential to improve care transitions and access to primary care among patients living in medically underserved areas. Larger pragmatic trials are needed to further test alternative approaches for ensuring rapid primary care follow-up in vulnerable patients with ambulatory care-sensitive chronic conditions.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted a retrospective federated cohort meta-analysis of 2015–2018 data from four large practice-based research networks in the Southern U.S. among adult patients with obesity and one or more additional diagnosed OCCs.
Abstract: Obesity-associated chronic conditions (OCC) are prevalent in medically underserved areas of the Southern US. Continuity of care with a primary care provider is associated with reduced preventable healthcare utilization, yet little is known regarding the impact of continuity of care among populations with OCC. This study aimed to examine whether continuity of care protects patients living with OCC, and the subgroup with type 2 diabetes (OCC+T2D), from emergency department (ED) visits and hospitalizations, and whether these effects are modified by race and patient residence in health professional shortage areas (HPSA). We conducted a retrospective federated cohort meta-analysis of 2015–2018 data from four large practice-based research networks in the Southern U.S. among adult patients with obesity and one or more additional diagnosed OCCs. The outcomes included overall and preventable ED visits and hospitalizations. Continuity of care was assessed at the clinic level using the Bice-Boxerman Continuity of Care Index. A total of 111,437 patients with OCC and 47,071 patients with OCC+T2D from the four large practice-based research networks in the South were included in the meta-analysis. The Continuity of Care Index varied among sites from a mean (SD) of 0.6 (0.4) to 0.9 (0.2). Meta-analysis demonstrated that, regardless of race or residence in an HPSA, continuity of care significantly protected OCC patients from preventable ED visits (IRR: 0.95; CI: 0.92–0.98) and protected OCC+T2D patients from overall ED visits (IRR: 0.92; CI: 0.85–0.99), preventable ED visits (IRR: 0.95; CI: 0.91–0.99), and overall hospitalizations (IRR: 0.96; CI: 0.93–0.98). Improving continuity of care may reduce ED and hospital use for patients with OCC, and particularly those with OCC+T2D.