Author

Ifeoma Nwogu

Bio: Ifeoma Nwogu is an academic researcher from Rochester Institute of Technology. The author has contributed to research in topics: Biometrics & Bayesian inference. The author has an h-index of 10 and has co-authored 58 publications receiving 349 citations. Previous affiliations of Ifeoma Nwogu include State University of New York System & University of Rochester.


Papers
Journal ArticleDOI
TL;DR: The use of technology is not pervasive in the continuum of stroke rehabilitation and physical and occupational therapists should consider using technology in stroke rehabilitation to better meet the needs of the patient.
Abstract: Purpose: With the patient care experience being a healthcare priority, it is concerning that patients with stroke reported boredom and a desire for greater fostering of autonomy when evaluating their rehabilitation experience. Technology has the potential to reduce these shortcomings by engaging patients through entertainment and objective feedback. Providing objective feedback has resulted in improved outcomes and may assist the patient in learning how to self-manage rehabilitation. Our goal was to examine the extent to which physical and occupational therapists use technology in clinical stroke rehabilitation home exercise programs. Materials and methods: Surveys were sent via mail, email and online postings to over 500 therapists; 107 responded. Results: Conventional equipment such as stopwatches is more frequently used than newer technology such as Wii and Kinect games. Still, fewer than 25% of therapists report using a stopwatch five or more times per week. Notably, feedback to patients i...

57 citations

Book ChapterDOI
TL;DR: In this article, a generative model harnesses both the temporal ordering power of dynamic Bayesian networks such as hidden Markov models (HMMs) and the automatic clustering power of hierarchical Bayesian models such as the latent Dirichlet allocation (LDA) model.
Abstract: We present language-motivated approaches to detecting, localizing and classifying activities and gestures in videos. In order to obtain statistical insight into the underlying patterns of motions in activities, we develop a dynamic, hierarchical Bayesian model which connects low-level visual features in videos with poses, motion patterns and classes of activities. This process is somewhat analogous to the method of detecting topics or categories from documents based on the word content of the documents, except that our documents are dynamic. The proposed generative model harnesses both the temporal ordering power of dynamic Bayesian networks such as hidden Markov models (HMMs) and the automatic clustering power of hierarchical Bayesian models such as the latent Dirichlet allocation (LDA) model. We also introduce a probabilistic framework for detecting and localizing pre-specified activities (or gestures) in a video sequence, analogous to the use of filler models for keyword detection in speech processing. We demonstrate the robustness of our classification model and our spotting framework by recognizing activities in unconstrained real-life video sequences and by spotting gestures via a one-shot-learning approach.
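The "temporal ordering power" of HMMs that the abstract leans on comes down to scoring an observation sequence under competing activity models with the forward algorithm. The sketch below is a minimal, self-contained illustration of that idea only (it omits the hierarchical LDA layer); the pose vocabulary, the two toy activity models, and their names are invented for illustration.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi: (S,) initial state probabilities
    A:  (S, S) transitions, A[i, j] = P(next=j | cur=i)
    B:  (S, V) emissions,   B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]        # joint prob. of first symbol and state
    log_prob = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()          # rescale to avoid underflow,
        log_prob += np.log(scale)    # accumulating the log of the scale
        alpha = (alpha / scale) @ A * B[:, obs[t]]
    return log_prob + np.log(alpha.sum())

# Two hypothetical activity models over a 3-symbol "pose" vocabulary:
# one cycles 0 -> 1 -> 2 (call it "wave"), the other lingers on symbol 0.
pi = np.array([1.0, 0.0, 0.0])
A_wave = np.array([[0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.8, 0.1, 0.1]])
A_idle = np.array([[0.8, 0.1, 0.1],
                   [0.3, 0.4, 0.3],
                   [0.3, 0.3, 0.4]])
B = np.full((3, 3), 0.05)
np.fill_diagonal(B, 0.9)             # state i mostly emits symbol i

seq = [0, 1, 2, 0, 1, 2]             # cyclic pattern
ll_wave = forward_log_likelihood(seq, pi, A_wave, B)
ll_idle = forward_log_likelihood(seq, pi, A_idle, B)
print(ll_wave > ll_idle)             # classify by the higher-likelihood model
```

Classification then reduces to picking the activity model with the highest sequence likelihood, which is the role the HMM component plays inside the paper's larger hierarchical model.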

41 citations

Proceedings ArticleDOI
21 Mar 2011
TL;DR: An automated framework detects deceit by measuring the deviation from normal behavior at a critical point in the course of an investigative interrogation; the results strongly suggest that the latent parameters of eye movements successfully capture behavioral changes and could be viable for use in automated deceit detection.
Abstract: Inspired by the behavioral science discoveries of Dr. Paul Ekman in relation to deceit detection, along with the television drama series Lie to Me, also based on Dr. Ekman's work, we use machine learning techniques to study the underlying phenomena expressed when a person tells a lie. We build an automated framework which detects deceit by measuring the deviation from normal behavior at a critical point in the course of an investigative interrogation. Behavioral psychologists have shown that the eyes (via either gaze aversion or gaze extension) can be good “reflectors” of the inner emotions when a person tells a high-stake lie. Hence we develop our deceit detection framework around eye movement changes. A dynamic Bayesian model of eye movements is trained during a normal course of conversation for each subject, to represent normal behavior. The remaining conversation is broken into sequences and each sequence is tested against the parameters of the model of normal behavior. At the critical points in the interrogations, the deviations from normalcy are observed and used to deduce verity/deceit. An analysis of 40 subjects gave an accuracy of 82.5%, which strongly suggests that the latent parameters of eye movements successfully capture behavioral changes and could be viable for use in automated deceit detection.
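The core detection scheme described above — fit a model of normal behavior per subject, then flag test windows that deviate from it — can be sketched with a deliberately simplified baseline. Here a multivariate Gaussian over synthetic per-frame "eye-movement" features stands in for the paper's dynamic Bayesian model; the feature dimensions, window sizes, and the 3-sigma threshold are all illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame eye-movement features (e.g. gaze dx, dy) from a
# subject's normal conversation; a Gaussian stands in for the paper's
# dynamic Bayesian model of eye movements.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
mu, cov = normal.mean(axis=0), np.cov(normal.T)

def gauss_loglik(x, mu, cov):
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv * d).sum(axis=1) \
           - 0.5 * (logdet + len(mu) * np.log(2 * np.pi))

def window_score(window):
    # Average per-frame log-likelihood of a test sequence under "normal".
    return gauss_loglik(window, mu, cov).mean()

# Calibrate a deviation threshold on held-out normal windows.
held_out = rng.normal(0.0, 1.0, size=(20, 30, 2))
baseline = np.array([window_score(w) for w in held_out])
threshold = baseline.mean() - 3 * baseline.std()

# A "critical point" window with gaze aversion: shifted, higher-variance motion.
critical = rng.normal(loc=2.5, scale=2.0, size=(30, 2))
print(window_score(critical) < threshold)  # deviation flagged
```

The design choice worth noting is that deceit is never modeled directly: only normal behavior is learned, and deceit is inferred from how far a critical-point window falls below the normal-likelihood baseline.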

36 citations

Posted Content
TL;DR: Joint training of deep autoencoders is investigated and it is found that the usage of regularizations in the joint training scheme is crucial in achieving good performance, and in the supervised setting, joint training also shows superior performance when training deeper models.
Abstract: Traditionally, when generative models of data are developed via deep architectures, greedy layer-wise pre-training is employed. In a well-trained model, the lower layer of the architecture models the data distribution conditional upon the hidden variables, while the higher layers model the hidden distribution prior. But due to the greedy scheme of the layerwise training technique, the parameters of lower layers are fixed when training higher layers. This makes it extremely challenging for the model to learn the hidden distribution prior, which in turn leads to a suboptimal model for the data distribution. We therefore investigate joint training of deep autoencoders, where the architecture is viewed as one stack of two or more single-layer autoencoders. A single global reconstruction objective is jointly optimized, such that the objective for the single autoencoders at each layer acts as a local, layer-level regularizer. We empirically evaluate the performance of this joint training scheme and observe that it not only learns a better data model, but also learns better higher layer representations, which highlights its potential for unsupervised feature learning. In addition, we find that the usage of regularizations in the joint training scheme is crucial in achieving good performance. In the supervised setting, joint training also shows superior performance when training deeper models. The joint training framework can thus provide a platform for investigating more efficient usage of different types of regularizers, especially in light of the growing volumes of available unlabeled data.
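The contrast the abstract draws — one global reconstruction objective optimized through all layers at once, versus greedy layer-wise pre-training that freezes lower layers — can be made concrete with a tiny linear stacked autoencoder trained by manual backpropagation. This is a minimal sketch of the joint-training idea only; it omits the per-layer regularizers the paper studies, and the dimensions and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # toy data

# Two stacked single-layer (linear) autoencoders: 8 -> 5 -> 3 -> 5 -> 8.
W1 = 0.1 * rng.normal(size=(8, 5))
W2 = 0.1 * rng.normal(size=(5, 3))
W3 = 0.1 * rng.normal(size=(3, 5))
W4 = 0.1 * rng.normal(size=(5, 8))

def loss():
    return ((X @ W1 @ W2 @ W3 @ W4 - X) ** 2).mean()

lr, n, d = 0.05, X.shape[0], X.shape[1]
initial = loss()
for _ in range(300):
    h1 = X @ W1; h2 = h1 @ W2; r1 = h2 @ W3; r = r1 @ W4
    # One global reconstruction objective: gradients flow through all
    # layers at once, instead of fixing lower layers as in greedy
    # layer-wise pre-training.
    dr = 2 * (r - X) / (n * d)
    dW4 = r1.T @ dr;  dr1 = dr @ W4.T
    dW3 = h2.T @ dr1; dh2 = dr1 @ W3.T
    dW2 = h1.T @ dh2; dh1 = dh2 @ W2.T
    dW1 = X.T @ dh1
    W1 -= lr * dW1; W2 -= lr * dW2; W3 -= lr * dW3; W4 -= lr * dW4

print(loss() < initial)                  # the joint objective decreases
```

In the paper's framing, the per-layer autoencoder objectives then act as local regularizers on top of this single global loss, rather than as separate training stages.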

33 citations

Proceedings ArticleDOI
08 Oct 2015
TL;DR: This work presents a model that uses text mining and topic modeling to detect malware, based on the types of API call sequences, and recommends Decision Tree as it yields 'if-then' rules, which could be used as an early warning expert system.
Abstract: Dissemination of malicious code, also known as malware, poses severe challenges to cyber security. Malware authors embed software in seemingly innocuous executables, unknown to a user. The malware subsequently interacts with security-critical OS resources on the host system or network, in order to destroy their information or to gather sensitive information such as passwords and credit card numbers. Malware authors typically use Application Programming Interface (API) calls to perpetrate these crimes. We present a model that uses text mining and topic modeling to detect malware, based on the types of API call sequences. We evaluated our technique on two publicly available datasets. We observed that Decision Tree and Support Vector Machine yielded significant results. We performed a t-test with respect to sensitivity for the two models and found that statistically there is no significant difference between them. We recommend Decision Tree as it yields ‘if-then’ rules, which could be used as an early warning expert system.
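The pipeline the abstract describes — treat API call traces as text, featurize them, and learn interpretable 'if-then' rules — can be illustrated with a one-level decision stump over bag-of-words counts. The API names, traces, and labels below are made up, and the stump is a stand-in for the paper's full Decision Tree; it exists only to show where the 'if-then' rule comes from.

```python
from collections import Counter

# Hypothetical labeled API call traces (1 = malware, 0 = benign).
traces = [
    ("OpenFile ReadFile CloseFile", 0),
    ("RegOpenKey RegQueryValue RegCloseKey", 0),
    ("OpenFile WriteFile CloseFile", 0),
    ("VirtualAllocEx WriteProcessMemory CreateRemoteThread", 1),
    ("OpenProcess VirtualAllocEx CreateRemoteThread", 1),
    ("CreateRemoteThread WriteProcessMemory CloseHandle", 1),
]

# Text-mining step: bag-of-words featurization of each trace.
vocab = sorted({api for t, _ in traces for api in t.split()})
X = [Counter(t.split()) for t, _ in traces]
y = [label for _, label in traces]

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_rule(X, y):
    """One-level decision stump: the single API whose presence best
    splits malware from benign (lowest weighted Gini impurity)."""
    best = None
    for api in vocab:
        left = [lbl for cnt, lbl in zip(X, y) if cnt[api] > 0]
        right = [lbl for cnt, lbl in zip(X, y) if cnt[api] == 0]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if best is None or score < best[0]:
            best = (score, api)
    return best[1]

rule_api = best_rule(X, y)
print(f"if trace contains '{rule_api}' then flag as malware")
predictions = [1 if cnt[rule_api] > 0 else 0 for cnt in X]
print(predictions == y)   # the stump separates this toy set perfectly
```

A full decision tree recurses this split on each branch; the appeal the authors cite is exactly that the learned tests read off directly as rules an analyst or expert system can act on.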

31 citations


Cited by

Journal ArticleDOI
TL;DR: The state-of-the-art in deep learning algorithms in computer vision is reviewed by highlighting the contributions and challenges from over 210 recent research papers, and the future trends and challenges in designing and training deep neural networks are summarized.

1,733 citations

01 Jan 2011
TL;DR: The study concludes that understanding lags first requires agreeing models, definitions and measures, which can be applied in practice, and a second task would be to develop a process by which to gather these data.
Abstract: This study aimed to review the literature describing and quantifying time lags in the health research translation process. Papers were included in the review if they quantified time lags in the development of health interventions. The study identified 23 papers. Few were comparable as different studies use different measures, of different things, at different time points. We concluded that the current state of knowledge of time lags is of limited use to those responsible for R&D and knowledge transfer who face difficulties in knowing what they should or can do to reduce time lags. This effectively ‘blindfolds’ investment decisions and risks wasting effort. The study concludes that understanding lags first requires agreeing models, definitions and measures, which can be applied in practice. A second task would be to develop a process by which to gather these data.

1,429 citations

Journal ArticleDOI
TL;DR: The results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps.
Abstract: We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16% (walking) and 10.05% (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved an EER of 15.1% using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7% and 34.2%. We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9% without sacrificing authentication accuracy. Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones).
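The equal error rate (EER) reported throughout this abstract is the operating point where the false accept rate (impostors accepted) equals the false reject rate (genuine users rejected). A minimal sketch on synthetic match scores, assuming well-separated genuine and impostor distributions purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic match scores: genuine (same-user) attempts score higher
# than impostor attempts on average.
genuine = rng.normal(loc=0.7, scale=0.12, size=1000)
impostor = rng.normal(loc=0.3, scale=0.12, size=1000)

def equal_error_rate(genuine, impostor):
    """Sweep decision thresholds; the EER is where the false accept
    rate (FAR) meets the false reject rate (FRR)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

eer = equal_error_rate(genuine, impostor)
print(0.0 <= eer < 0.15)   # well-separated scores give a low EER
```

Reporting a single EER is what lets the paper compare feature sets (HMOG vs. tap vs. keystroke) and conditions (sitting vs. walking) on one number.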

319 citations