Home

Research Interests

My research interests lie at the intersection of artificial intelligence, machine learning, and statistics. I am particularly interested in hierarchical graphical models and approximate inference and learning techniques, including Markov chain Monte Carlo and variational Bayesian methods. My current research places particular emphasis on models and algorithms for multivariate time series data. Thanks to recent awards from NSF and NIH, my current applied work focuses on machine learning-based analytics for clinical and mobile health (mHealth) data. In the past, I have worked on a broad range of applications, including collaborative filtering and ranking, unsupervised structure discovery and feature induction, object recognition and image labeling, and natural language processing, and I continue to consult on projects in these areas.

Recent Funded Projects

[2014-2018] Center of Excellence for Mobile Sensor Data to Knowledge (with Santosh Kumar, U. Memphis, PI). See center website.

[2014-2019] NSF CAREER: Machine Learning for Complex Health Data Analytics.

[2013-2016] Accurate and Computationally Efficient Predictors of Java Memory Resource Consumption (with Eliot Moss, PI).

[2012-2015] SensEye: An Architecture for Ubiquitous, Real-Time Visual Context Sensing and Inference (with Deepak Ganesan, PI).

[2012-2015] Patient Experience Recommender System for Persuasive Communication Tailoring (with Tom Houston, UMMS, PI).

[2012-2014] Foresight and Understanding from Scientific Exposition (with Andrew McCallum, PI, and Raytheon BBN Technologies).

Recent Publications

Li, Steven Cheng-Xian, and Benjamin M. Marlin. "Classification of Sparse and Irregularly Sampled Time Series with Mixtures of Expected Gaussian Kernels and Random Features." 31st Conference on Uncertainty in Artificial Intelligence. 2015.

This paper presents a kernel-based framework for classification of sparse and irregularly sampled time series. The properties of such time series can result in substantial uncertainty about the values of the underlying temporal processes, while making the data difficult to deal with using standard classification methods that assume fixed-dimensional feature spaces. To address these challenges, we propose to first re-represent each time series through the Gaussian process (GP) posterior it induces under a GP regression model. We then define kernels over the space of GP posteriors and apply standard kernel-based classification. Our primary contributions are (i) the development of a kernel between GPs based on the mixture of kernels between their finite marginals, (ii) the development and analysis of extensions of random Fourier features for scaling the proposed kernel to large-scale data, and (iii) an extensive empirical analysis of both the classification performance and scalability of our proposed approach.
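
As a rough illustration of the random-feature idea behind contribution (ii), the sketch below approximates a standard RBF kernel with random Fourier features, so that training scales linearly in the number of examples. This is the generic textbook construction under my own assumptions, not the paper's expected Gaussian kernel between GP posteriors.

```python
import numpy as np

def random_fourier_features(X, n_features=500, lengthscale=1.0, seed=0):
    """Approximate the RBF kernel k(x, z) = exp(-||x - z||^2 / (2 lengthscale^2))
    with an explicit random feature map, so kernel machines scale linearly in n."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (a Gaussian).
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=5000)
approx = Z @ Z.T                                  # approximate kernel matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-sq_dists / 2.0)                   # exact RBF kernel matrix
print(np.abs(approx - exact).max())               # small; shrinks with n_features
```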

Huang, Haibin, Evangelos Kalogerakis, and Benjamin Marlin. "Analysis and synthesis of 3D shape families via deep-learned generative models of surfaces." Symposium on Geometry Processing. 2015.

We present a method for joint analysis and synthesis of geometrically diverse 3D shape families. Our method first learns part-based templates such that an optimal set of fuzzy point and part correspondences is computed between the shapes of an input collection based on a probabilistic deformation model. In contrast to previous template-based approaches, the geometry and deformation parameters of our part-based templates are learned from scratch. Based on the estimated shape correspondence, our method also learns a probabilistic generative model that hierarchically captures statistical relationships of corresponding surface point positions and parts as well as their existence in the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model is used to produce control point arrangements that drive shape synthesis by combining and deforming parts from the input collection. The generative model also yields compact shape descriptors that are used to perform fine-grained classification. Finally, it can also be coupled with the probabilistic deformation model to further improve shape correspondence. We provide qualitative and quantitative evaluations of our method for shape correspondence, segmentation, fine-grained classification, and synthesis. Our experiments demonstrate superior correspondence and segmentation results relative to previous state-of-the-art approaches.
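
For a flavor of the fuzzy correspondence step, the sketch below computes soft point correspondences under a simple isotropic Gaussian noise model. This is a generic soft-assignment computation under assumptions of my own (the Gaussian model and sigma value are invented), not the paper's learned part-based deformation model.

```python
import numpy as np

def soft_correspondences(template_pts, shape_pts, sigma=0.1):
    """Fuzzy correspondences: weight for (shape point s, template point t) is
    proportional to exp(-||s - t||^2 / (2 sigma^2)), normalized per shape point.
    A generic soft-assignment step, not the paper's exact model."""
    d2 = ((shape_pts[:, None, :] - template_pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)  # each row sums to one

rng = np.random.default_rng(0)
template = rng.uniform(size=(20, 3))                     # template control points
shape = template[:5] + 0.05 * rng.normal(size=(5, 3))    # noisy observed points
print(soft_correspondences(template, shape).argmax(axis=1))  # likely matches
```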

Mayberry, Addison, Yamin Tun, Pan Hu, Duncan Smith-Freedman, Deepak Ganesan, Benjamin Marlin, and Christopher Salthouse. "CIDER: Enabling Robustness-Power Tradeoffs on a Computational Eyeglass." 21st Annual International Conference on Mobile Computing and Networking. 2015.

The human eye offers a fascinating window into an individual's health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. The challenges lie in: a) handling the complexity of continuous high-rate sensing from a camera and processing the image stream to estimate eye parameters, and b) dealing with the wide variability in illumination conditions in the natural environment. This paper explores the power-robustness tradeoffs inherent in the design of a wearable eye tracker, and proposes a novel staged architecture that enables graceful adaptation across the spectrum of real-world illumination. We propose CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared, b) error in estimating pupil center and pupil dilation, and c) model training procedures that involve zero effort from a user. We demonstrate that the system can estimate pupil center with error of less than two pixels, and pupil diameter with error of one pixel (0.22 mm). Our end-to-end results show that we can operate at power levels of roughly 7 mW at a 4 Hz eye tracking rate, or roughly 32 mW at rates upwards of 250 Hz.
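
The staged idea can be caricatured as a mode controller that picks a tracker per frame based on estimated illumination. Everything below is invented for illustration (the brightness proxy, the threshold, and the stand-in trackers); it is a minimal sketch of the control-flow pattern, not CIDER's actual pipeline.

```python
import numpy as np

OUTDOOR_THRESHOLD = 180.0  # assumed brightness switch point, not from the paper

def estimate_illumination(frame):
    # Stand-in: mean pixel intensity as a crude illumination proxy.
    return float(frame.mean())

def track_low_power(frame, prev):
    # Stand-in for a fast search-refine tracker around the previous estimate.
    return prev

def track_robust(frame):
    # Stand-in for the slower, illumination-robust pupil model:
    # centroid of dark pixels as a toy pupil estimate.
    ys, xs = np.nonzero(frame < frame.mean())
    return (float(xs.mean()), float(ys.mean()))

def track_frame(frame, state):
    """Choose the tracking mode for one frame based on estimated illumination."""
    if estimate_illumination(frame) < OUTDOOR_THRESHOLD:
        return track_low_power(frame, prev=state)   # optimized indoor path
    return track_robust(frame)                      # robust outdoor path

frame = np.random.default_rng(0).integers(0, 256, size=(112, 112)).astype(float)
print(track_frame(frame, state=(56.0, 56.0)))
```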

Saleheen, Nazir, Amin Ali, Syed Monowar Hossain, Hillol Sarker, Soujanya Chatterjee, Benjamin Marlin, Emre Ertin, Mustafa al'Absi, and Santosh Kumar. "puffMarker: A Multi-Sensor Approach for Pinpointing the Timing of First Lapse in Smoking Cessation." 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 2015.

Smoking is the leading cause of preventable deaths. Mobile technologies can help to deliver just-in-time interventions to abstinent smokers and assist them in resisting urges to lapse. Doing so, however, requires identification of high-risk situations that may lead an abstinent smoker to relapse. In this paper, we propose an explainable model for detecting smoking lapses in newly abstinent smokers using respiration and 6-axis inertial sensors worn on the wrists. We propose a novel method for identifying windows of data that represent the hand at the mouth. We then develop a model to classify each window as a puff or non-puff. On the training data, the model achieves a recall rate of 98% at a false positive rate of 1.5%. When the model is applied to data collected from 13 abstainers, the false positive rate is 0.3 per hour. Among 15 lapsers, the model is able to pinpoint the timing of the first lapse in 13 participants.
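
The detection pipeline has a simple general shape: segment candidate hand-to-mouth windows, extract features per window, and classify each window as puff or non-puff. The sketch below mimics that shape on synthetic data, with a plain logistic regression and single-channel signals standing in for puffMarker's model and its 6-axis inertial and respiration features; all names and features here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(accel, resp):
    """Illustrative features for one candidate hand-to-mouth window
    (one accelerometer channel and one respiration channel, simplified)."""
    return np.array([
        accel.mean(), accel.std(),        # wrist posture and movement energy
        np.abs(np.diff(accel)).mean(),    # jerkiness
        resp.max() - resp.min(),          # breath amplitude within the window
    ])

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 windows of 3 s at 20 Hz (60 samples each).
accel = rng.normal(size=(200, 60))
resp = rng.normal(size=(200, 60))
labels = rng.integers(0, 2, size=200)     # 1 = puff, 0 = non-puff

X = np.stack([window_features(a, r) for a, r in zip(accel, resp)])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X[:5]))
```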

Mayberry, Addison, Pan Hu, Benjamin Marlin, Christopher Salthouse, and Deepak Ganesan. "iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker." 12th International Conference on Mobile Systems, Applications, and Services. 2014.

Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device that requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form factor. The key idea is that eye images are extremely redundant; therefore, we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a Bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70 mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
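
The pixel-subset idea can be demonstrated in miniature with any sparsity-inducing regularizer: fit a gaze predictor under an L1 penalty and keep only the pixels with nonzero weight. The linear Lasso below, on synthetic data, is a simplification of my own; the paper itself uses a multi-layer neural network with a sparsity-inducing regularizer.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_frames, h, w = 500, 16, 16                 # tiny synthetic "eye images"
images = rng.normal(size=(n_frames, h * w))
# Synthetic gaze signal that depends on only a few pixels, mimicking the
# redundancy observation in the abstract.
true_pixels = rng.choice(h * w, size=10, replace=False)
gaze = images[:, true_pixels].sum(axis=1) + 0.1 * rng.normal(size=n_frames)

# The L1 penalty drives most pixel weights to exactly zero, so only the
# selected pixels would ever need to be digitized at run time.
model = Lasso(alpha=0.1).fit(images, gaze)
selected = np.nonzero(model.coef_)[0]
print(f"pixels kept: {selected.size} / {h * w}")
```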

Kae, Andrew, Erik Learned-Miller, and Benjamin M. Marlin. "The Shape-Time Random Field for Semantic Video Labeling." 2014 IEEE Conference on Computer Vision and Pattern Recognition. 2014.

We propose a novel discriminative model for semantic labeling in videos by incorporating a prior to model both the shape and temporal dependencies of an object in video. A typical approach for this task is the conditional random field (CRF), which can model local interactions among adjacent regions in a video frame. Recent work [16, 14] has shown how to incorporate a shape prior into a CRF for improving labeling performance, but it may be difficult to model temporal dependencies present in video by using this prior. The conditional restricted Boltzmann machine (CRBM) can model both shape and temporal dependencies, and has been used to learn walking styles from motion-capture data. In this work, we incorporate a CRBM prior into a CRF framework and present a new state-of-the-art model for the task of semantic labeling in videos. In particular, we explore the task of labeling parts of complex face scenes from videos in the YouTube Faces Database (YFDB). Our combined model outperforms competitive baselines both qualitatively and quantitatively.
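
One way to picture the combination is as adding an RBM-style free energy term to the CRF's labeling objective. The sketch below computes the standard RBM free energy that such a shape prior would contribute when scoring a binary label mask; the conditioning on previous frames (the "C" in CRBM) and all CRF machinery are omitted, and the weights are random placeholders rather than trained parameters.

```python
import numpy as np

def rbm_free_energy(v, W, a, b):
    """Standard RBM free energy F(v) = -a.v - sum_j softplus(b_j + (v W)_j).
    Lower free energy means the prior considers the labeling more plausible."""
    hidden_in = b + v @ W
    softplus = np.logaddexp(0.0, hidden_in)   # numerically stable log(1 + e^x)
    return -(v @ a) - softplus.sum()

rng = np.random.default_rng(0)
n_vis, n_hid = 64, 32                          # e.g. a coarse 8x8 label mask
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
a = rng.normal(scale=0.1, size=n_vis)
b = rng.normal(scale=0.1, size=n_hid)

labeling = rng.integers(0, 2, size=n_vis).astype(float)
# In a combined model, this term would be added (with a weight) to the CRF's
# local potentials when scoring a candidate labeling.
print(rbm_free_energy(labeling, W, a, b))
```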