Publications

A
Adams, Roy J., and Benjamin M. Marlin. "Learning Time Series Detection Models from Temporally Imprecise Labels." The 20th International Conference on Artificial Intelligence and Statistics. 2017. adams17a.pdf

In this paper, we consider a new low-quality label learning problem: learning time series detection models from temporally imprecise labels. In this problem, the data consist of a set of input time series, and supervision is provided by a sequence of noisy time stamps corresponding to the occurrence of positive class events. Such temporally imprecise labels commonly occur in areas like mobile health research where human annotators are tasked with labeling the occurrence of very short duration events. We propose a general learning framework for this problem that can accommodate different base classifiers and noise models. We present results on real mobile health data showing that the proposed framework significantly outperforms a number of alternatives including assuming that the label time stamps are noise-free, transforming the problem into the multiple instance learning framework, and learning on labels that were manually re-aligned.

Adams, Roy, Nazir Saleheen, Edison Thomaz, Abhinav Parate, Santosh Kumar, and Benjamin Marlin. "Hierarchical Span-Based Conditional Random Fields for Labeling and Segmenting Events in Wearable Sensor Data Streams." International Conference on Machine Learning. 2016. icml2016_hns.pdf

The field of mobile health (mHealth) has the potential to yield new insights into health and behavior through the analysis of continuously recorded data from wearable health and activity sensors. In this paper, we present a hierarchical span-based conditional random field model for the key problem of jointly detecting discrete events in such sensor data streams and segmenting these events into high-level activity sessions. Our model includes higher-order cardinality factors and inter-event duration factors to capture domain-specific structure in the label space. We show that our model supports exact MAP inference in quadratic time via dynamic programming, which we leverage to perform learning in the structured support vector machine framework. We apply the model to the problems of smoking and eating detection using four real data sets. Our results show statistically significant improvements in segmentation performance at the p=0.005 level relative to a hierarchical pairwise CRF.

Adams, Roy J., Edison Thomaz, and Benjamin M. Marlin. "Hierarchical Nested CRFs for Segmentation and Labeling of Physiological Time Series." NIPS Workshop on Machine Learning in Healthcare. 2015. adams-nips-heath2015.pdf

In this paper, we address the problem of nested hierarchical segmentation and labeling of time series data. We present a hierarchical span-based conditional random field framework for this problem that leverages higher-order factors to enforce the nesting constraints. The framework can incorporate a variety of additional factors, including higher-order cardinality factors. This research is motivated by hierarchical activity recognition problems in the field of mobile health (mHealth). We show that the specific model of interest in the mHealth setting supports exact MAP inference in quadratic time. Learning is accomplished in the structured support vector machine framework. We show positive results on real and synthetic data sets.

Adams, Roy J., Rajani S. Sadasivam, Kavitha Balakrishnan, Rebecca L. Kinney, Thomas K. Houston, and Benjamin M. Marlin. "PERSPeCT: Collaborative Filtering for Tailored Health Communications." Proceedings of the 8th ACM Conference on Recommender Systems. RecSys '14. New York, NY, USA: ACM, 2014. 329-332. perspect-recsys14.pdf

The goal of computer tailored health communications (CTHC) is to elicit healthy behavior changes by sending motivational messages personalized to individual patients. One prominent weakness of many existing CTHC systems is that they are based on expert-written rules and thus have no ability to learn from their users over time. One solution to this problem is to develop CTHC systems based on the principles of collaborative filtering, but this approach has not been widely studied. In this paper, we present a case study evaluating nine rating prediction methods for use in the Patient Experience Recommender System for Persuasive Communication Tailoring, a system developed for use in a clinical trial of CTHC-based smoking cessation support interventions.

B
Boutilier, Craig, Richard S. Zemel, and Benjamin M. Marlin. "Active Collaborative Filtering." UAI. 2003. 98-106.

Collaborative filtering (CF) allows the preferences of multiple users to be pooled to make recommendations regarding unseen products. We consider in this paper the problem of online and interactive CF: given the current ratings associated with a user, what queries (new ratings) would most improve the quality of the recommendations made? We cast this in terms of expected value of information (EVOI), but the online computational cost of computing optimal queries is prohibitive. We show how offline prototyping and computation of bounds on EVOI can be used to dramatically reduce the required online computation. The framework we develop is general, but we focus on derivations and empirical study in the specific case of the multiple-cause vector quantization model.

C
Chiu, Meng-Chieh, Benjamin Marlin, and Eliot Moss. "Real-Time Program-Specific Phase Change Detection for Java Programs." 13th International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools. 2016. a12-chiu.pdf

It is well-known that programs tend to have multiple phases in their execution. Because phases have impact on micro-architectural features such as caches and branch predictors, they are relevant to program performance and energy consumption. They are also relevant to detecting whether a program is executing as expected or is encountering unusual or exceptional conditions, a software engineering and program monitoring concern. We offer here a method for real-time phase change detection in Java programs. After applying a training protocol to a program of interest, our method can detect phase changes at run time for that program with good precision and recall (compared with a “ground truth” definition of phases) and with small performance impact (average less than 2%). We also offer improved methodology for evaluating phase change detection mechanisms. In sum, our approach offers the first known implementation of real-time phase detection for Java programs.

D
Dadkhahi, Hamid, Nazir Saleheen, Santosh Kumar, and Benjamin Marlin. "Learning Shallow Detection Cascades for Wearable Sensor-Based Mobile Health Applications." ICML On Device Intelligence Workshop. 2016. dadkhahi-icml-odi2017.pdf

The field of mobile health aims to leverage recent advances in wearable on-body sensing technology and smart phone computing capabilities to develop systems that can monitor health states and deliver just-in-time adaptive interventions. However, existing work has largely focused on analyzing collected data in the off-line setting. In this paper, we propose a novel approach to learning shallow detection cascades developed explicitly for use in real-time wearable-phone or wearable-phone-cloud systems. We apply our approach to the problem of cigarette smoking detection from a combination of wrist-worn actigraphy data and respiration chest band data using two- and three-stage cascades.

Dadkhahi, Hamid, and Benjamin Marlin. "Learning Tree-Structured Detection Cascades for Heterogeneous Networks of Embedded Devices." 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017. fp0911-dadkhahia.pdf

To appear.

In this paper, we present a new approach to learning cascaded classifiers for use in computing environments that involve networks of heterogeneous and resource-constrained, low-power embedded compute and sensing nodes. We present a generalization of the classical linear detection cascade to the case of tree-structured cascades where different branches of the tree execute on different physical compute nodes in the network. Different nodes have access to different features, as well as access to potentially different computation and energy resources. We concentrate on the problem of jointly learning the parameters for all of the classifiers in the cascade given a fixed cascade architecture and a known set of costs required to carry out the computation at each node. To accomplish the objective of joint learning of all detectors, we propose a novel approach to combining classifier outputs during training that better matches the hard cascade setting in which the learned system will be deployed. This work is motivated by research in the area of mobile health, where energy-efficient real-time detectors integrating information from multiple wireless on-body sensors and a smart phone are needed for real-time monitoring and the delivery of just-in-time adaptive interventions. We evaluate our framework on mobile sensor-based human activity recognition and mobile health detector learning problems.

Duvenaud, David K., Benjamin M. Marlin, and Kevin P. Murphy. "Multiscale Conditional Random Fields for Semi-supervised Labeling and Classification." CRV. 2011. 371-378. multiscale_crv11_paper.pdf

Motivated by the abundance of images labeled only by their captions, we construct tree-structured multiscale conditional random fields capable of performing semi-supervised learning. We show that such caption-only data can in fact increase pixel-level accuracy at test time. In addition, we compare two kinds of tree: the standard one with pairwise potentials, and one based on noisy-or potentials, which better matches the semantics of the recursive partitioning used to create the tree.

H
Hiatt, Laura, Roy Adams, and Benjamin Marlin. "An Improved Data Representation for Smoking Detection with Wearable Respiration Sensors." IEEE Wireless Health. 2016. hiatt-wh2016.pdf

Late breaking extended abstract.

Huang, Haibin, Evangelos Kalogerakis, and Benjamin Marlin. "Analysis and Synthesis of 3D Shape Families via Deep-Learned Generative Models of Surfaces." Symposium on Geometry Processing. 2015. huang-sgp2015.pdf

We present a method for joint analysis and synthesis of geometrically diverse 3D shape families. Our method first learns part-based templates such that an optimal set of fuzzy point and part correspondences is computed between the shapes of an input collection based on a probabilistic deformation model. In contrast to previous template-based approaches, the geometry and deformation parameters of our part-based templates are learned from scratch. Based on the estimated shape correspondence, our method also learns a probabilistic generative model that hierarchically captures statistical relationships of corresponding surface point positions and parts as well as their existence in the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model is used to produce control point arrangements that drive shape synthesis by combining and deforming parts from the input collection. The generative model also yields compact shape descriptors that are used to perform fine-grained classification. Finally, it can also be coupled with the probabilistic deformation model to further improve shape correspondence. We provide qualitative and quantitative evaluations of our method for shape correspondence, segmentation, fine-grained classification, and synthesis. Our experiments demonstrate superior correspondence and segmentation results compared to previous state-of-the-art approaches.

I
Iyengar, Srinivasan, Sandeep Kalra, Anushree Ghosh, David Irwin, Prashant Shenoy, and Benjamin Marlin. "iProgram: Inferring Smart Schedules for Dumb Thermostats." 10th Annual Women in Machine Learning Workshop. 2015.

Heating, ventilation, and air conditioning (HVAC) accounts for over 50% of a typical home's energy usage. A thermostat generally controls HVAC usage in a home to ensure user comfort. In this paper, we focus on making existing "dumb" programmable thermostats smart by applying energy analytics on smart meter data to infer home occupancy patterns and compute an optimized thermostat schedule. Utilities with smart meter deployments are capable of immediately applying our approach, called iProgram, to homes across their customer base. iProgram addresses new challenges in inferring home occupancy from smart meter data where i) training data is not available and ii) the thermostat schedule may be misaligned with occupancy, frequently resulting in high power usage during unoccupied periods. iProgram translates occupancy patterns inferred from opaque smart meter data into a custom schedule for existing types of programmable thermostats, e.g., 1-day, 7-day, etc. We implement iProgram as a web service and show that it reduces the mismatch time between the occupancy pattern and the thermostat schedule by a median value of 44.28 minutes (out of 100 homes) when compared to a default 8am-6pm weekday schedule, with a median deviation of 30.76 minutes off the optimal schedule. Further, iProgram yields a daily energy saving of 0.42kWh on average across the 100 homes. Utilities may use iProgram to recommend thermostat schedules to customers and provide them estimates of potential energy savings in their energy bills.

Iyengar, Srinivasan, Sandeep Kalra, Anushree Ghosh, David Irwin, Prashant Shenoy, and Benjamin Marlin. "iProgram: Inferring Smart Schedules for Dumb Thermostats." Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments. BuildSys '15. New York, NY, USA: ACM, 2015. 211-220. p211-iyengar.pdf

Heating, ventilation, and air conditioning (HVAC) accounts for over 50% of a typical home's energy usage. A thermostat generally controls HVAC usage in a home to ensure user comfort. In this paper, we focus on making existing "dumb" programmable thermostats smart by applying energy analytics on smart meter data to infer home occupancy patterns and compute an optimized thermostat schedule. Utilities with smart meter deployments are capable of immediately applying our approach, called iProgram, to homes across their customer base. iProgram addresses new challenges in inferring home occupancy from smart meter data where i) training data is not available and ii) the thermostat schedule may be misaligned with occupancy, frequently resulting in high power usage during unoccupied periods. iProgram translates occupancy patterns inferred from opaque smart meter data into a custom schedule for existing types of programmable thermostats, e.g., 1-day, 7-day, etc. We implement iProgram as a web service and show that it reduces the mismatch time between the occupancy pattern and the thermostat schedule by a median value of 44.28 minutes (out of 100 homes) when compared to a default 8am-6pm weekday schedule, with a median deviation of 30.76 minutes off the optimal schedule. Further, iProgram yields a daily energy saving of 0.42kWh on average across the 100 homes. Utilities may use iProgram to recommend thermostat schedules to customers and provide them estimates of potential energy savings in their energy bills.

J
Jacek, Nicholas, Meng-Chieh Chiu, Benjamin Marlin, and Eliot J. B. Moss. "Assessing the Limits of Program-Specific Garbage Collection Performance." Programming Language Design and Implementation. 2016. p584-jacek.pdf

Distinguished Paper Award

We consider the ultimate limits of program-specific garbage collector performance for real programs. We first characterize the GC schedule optimization problem using Markov Decision Processes (MDPs). Based on this characterization, we develop a method of determining, for a given program run and heap size, an optimal schedule of collections for a non-generational collector. We further explore the limits of performance of a generational collector, where it is not feasible to search the space of schedules to prove optimality. Still, we show significant improvements with Least Squares Policy Iteration, a reinforcement learning technique for solving MDPs. We demonstrate that there is considerable promise to reduce garbage collection costs by developing program-specific collection policies.

K
Kae, Andrew, Erik Learned-Miller, and Benjamin M. Marlin. "The Shape-Time Random Field for Semantic Video Labeling." 2014 IEEE Conference on Computer Vision and Pattern Recognition. 2014. strf_cvpr14.pdf

We propose a novel discriminative model for semantic labeling in videos by incorporating a prior to model both the shape and temporal dependencies of an object in video. A typical approach for this task is the conditional random field (CRF), which can model local interactions among adjacent regions in a video frame. Recent work [16, 14] has shown how to incorporate a shape prior into a CRF for improving labeling performance, but it may be difficult to model temporal dependencies present in video by using this prior. The conditional restricted Boltzmann machine (CRBM) can model both shape and temporal dependencies, and has been used to learn walking styles from motion-capture data. In this work, we incorporate a CRBM prior into a CRF framework and present a new state-of-the-art model for the task of semantic labeling in videos. In particular, we explore the task of labeling parts of complex face scenes from videos in the YouTube Faces Database (YFDB). Our combined model outperforms competitive baselines both qualitatively and quantitatively.

Khan, Mohammad Emtiyaz, Shakir Mohamed, Benjamin M. Marlin, and Kevin P. Murphy. "A Stick-Breaking Likelihood for Categorical Data Analysis with Latent Gaussian Models." AISTATS. 2012. 610-618. sblgm-aistats2012-paper.pdf

The development of accurate models and efficient algorithms for the analysis of multivariate categorical data is an important and long-standing problem in machine learning and computational statistics. In this paper, we focus on modeling categorical data using Latent Gaussian Models (LGMs). We propose a novel stick-breaking likelihood function for categorical LGMs that exploits accurate linear and quadratic bounds on the logistic log-partition function, leading to an effective variational inference and learning framework. We thoroughly compare our approach to existing algorithms for multinomial logit/probit likelihoods on several problems, including inference in multinomial Gaussian process classification and learning in latent factor models. Our extensive comparisons demonstrate that our stick-breaking model effectively captures correlation in discrete data and is well suited for the analysis of categorical data.

Khan, Mohammad Emtiyaz, Benjamin M. Marlin, Guillaume Bouchard, and Kevin P. Murphy. "Variational bounds for mixed-data factor analysis." NIPS. 2010. 1108-1116.

We propose a new variational EM algorithm for fitting factor analysis models with mixed continuous and categorical observations. The algorithm is based on a simple quadratic bound to the log-sum-exp function. In the special case of fully observed binary data, the bound we propose is significantly faster than previous variational methods. We show that EM is significantly more robust in the presence of missing data compared to treating the latent factors as parameters, which is the approach used by exponential family PCA and other related matrix-factorization methods. A further benefit of the variational approach is that it can easily be extended to the case of mixtures of factor analyzers, as we show. We present results on synthetic and real data sets demonstrating several desirable properties of our proposed method.

Kumar, S., and others. "Center of excellence for mobile sensor Data-to-Knowledge (MD2K)." Journal of the American Medical Informatics Association 22.6 (2015): 1137-1142.

Mobile sensor data-to-knowledge (MD2K) was chosen as one of 11 Big Data Centers of Excellence by the National Institutes of Health, as part of its Big Data-to-Knowledge initiative. MD2K is developing innovative tools to streamline the collection, integration, management, visualization, analysis, and interpretation of health data generated by mobile and wearable sensors. The goal of the big data solutions being developed by MD2K is to reliably quantify physical, biological, behavioral, social, and environmental factors that contribute to health and disease risk. The research conducted by MD2K is targeted at improving health through early detection of adverse health events and by facilitating prevention. MD2K will make its tools, software, and training materials widely available and will also organize workshops and seminars to encourage their use by researchers and clinicians.

L
Li, Steven Cheng-Xian, and Benjamin M. Marlin. "Collaborative Multi-Output Gaussian Processes for Collections of Sparse Multivariate Time Series." NIPS Time Series Workshop. 2015. li-nips-ts2015.pdf

Collaborative Multi-Output Gaussian Processes (COGPs) are a flexible tool for modeling multivariate time series. They induce correlation across outputs through the use of shared latent processes. While past work has focused on the computational challenges that result from a single multivariate time series with many observed values, this paper explores the problem of fitting the COGP model to collections of many sparse and irregularly sampled multivariate time series. This work is motivated by applications to modeling physiological data (heart rate, blood pressure, etc.) in Electronic Health Records (EHRs).

Li, Steven Cheng-Xian, and Benjamin M. Marlin. "Classification of Sparse and Irregularly Sampled Time Series with Mixtures of Expected Gaussian Kernels and Random Features." 31st Conference on Uncertainty in Artificial Intelligence. 2015. li-uai2015.pdf

This paper presents a kernel-based framework for classification of sparse and irregularly sampled time series. The properties of such time series can result in substantial uncertainty about the values of the underlying temporal processes, while making the data difficult to deal with using standard classification methods that assume fixed-dimensional feature spaces. To address these challenges, we propose to first re-represent each time series through the Gaussian process (GP) posterior it induces under a GP regression model. We then define kernels over the space of GP posteriors and apply standard kernel-based classification. Our primary contributions are (i) the development of a kernel between GPs based on the mixture of kernels between their finite marginals, (ii) the development and analysis of extensions of random Fourier features for scaling the proposed kernel to large-scale data, and (iii) an extensive empirical analysis of both the classification performance and scalability of our proposed approach.

Li, Steven Cheng-Xian, and Benjamin M. Marlin. "A Scalable End-to-End Gaussian Process Adapter for Irregularly Sampled Time Series Classification." Advances in Neural Information Processing Systems. 2016. li-nips2016.pdf

We present a general framework for classification of sparse and irregularly-sampled time series. The properties of such time series can result in substantial uncertainty about the values of the underlying temporal processes, while making the data difficult to deal with using standard classification methods that assume fixed-dimensional feature spaces. To address these challenges, we propose an uncertainty-aware classification framework based on a special computational layer we refer to as the Gaussian process adapter that can connect irregularly sampled time series data to any black-box classifier learnable using gradient descent. We show how to scale up the required computations based on combining the structured kernel interpolation framework and the Lanczos approximation method, and how to discriminatively train the Gaussian process adapter in combination with a number of classifiers end-to-end using backpropagation.

M
Marlin, Benjamin M., Roy J. Adams, Rajani Sadasivam, and Thomas K. Houston. "Towards Collaborative Filtering Recommender Systems for Tailored Health Communications." AMIA 2013 Annual Symposium. 2013. cthc_recsys13_paper.pdf

The goal of computer tailored health communications (CTHC) is to promote healthy behaviors by sending messages tailored to individual patients. Current CTHC systems collect baseline patient “profiles” and then use expert-written, rule-based systems to target messages to subsets of patients. Our main interest in this work is the study of collaborative filtering-based CTHC systems that can learn to tailor future message selections to individual patients based explicit feedback about past message selections. This paper reports the results of a study designed to collect explicit feedback (ratings) regarding four aspects of messages from 100 subjects in the smoking cessation support domain. Our results show that most users have positive opinions of most messages and that the ratings for all four aspects of the messages are highly correlated with each other. Finally, we conduct a range of rating prediction experiments comparing several different model variations. Our results show that predicting future ratings based on each user’s past ratings contributes the most to predictive accuracy.

Marlin, Benjamin M., Richard S. Zemel, Sam T. Roweis, and Malcolm Slaney. "Collaborative Filtering and the Missing at Random Assumption." UAI. 2007. 267-275.

Rating prediction is an important application, and a popular research topic in collaborative filtering. However, both the validity of learning algorithms, and the validity of standard testing procedures rest on the assumption that missing ratings are missing at random (MAR). In this paper we present the results of a user study in which we collect a random sample of ratings from current users of an online radio service. An analysis of the rating data collected in the study shows that the sample of random ratings has markedly different properties than ratings of user-selected songs. When asked to report on their own rating behaviour, a large number of users indicate they believe their opinion of a song does affect whether they choose to rate that song, a violation of the MAR condition. Finally, we present experimental results showing that incorporating an explicit model of the missing data mechanism can lead to significant improvements in prediction performance on the random sample of ratings.

Marlin, Benjamin M., Richard S. Zemel, Sam T. Roweis, and Malcolm Slaney. "Recommender Systems, Missing Data and Statistical Model Estimation." IJCAI. 2011. 2686-2691. missing_data_ijcai11_paper.pdf

The goal of rating-based recommender systems is to make personalized predictions and recommendations for individual users by leveraging the preferences of a community of users with respect to a collection of items like songs or movies. Recommender systems are often based on intricate statistical models that are estimated from data sets containing a very high proportion of missing ratings. This work describes evidence of a basic incompatibility between the properties of recommender system data sets and the assumptions required for valid estimation and evaluation of statistical models in the presence of missing data. We discuss the implications of this problem and describe extended modelling and evaluation frameworks that attempt to circumvent it. We present prediction and ranking results showing that models developed and tested under these extended frameworks can significantly outperform standard models.

Marlin, Benjamin M., and Godfried T. Toussaint. "Constructing convex 3-polytopes from two triangulations of a polygon." CCCG. 2002. 36-39.

Guibas conjectured that given a convex polygon P in the xy-plane along with two triangulations of it, T1 and T2, that share no diagonals, it is always possible to assign height values to the vertices of P such that P ∪ T1 ∪ T2 becomes a convex 3-polytope. Dekster found a counterexample but left open the questions of deciding if a given configuration corresponds to a convex 3-polytope, and constructing such realizations when they exist. This paper gives a proof that a relaxed version of Guibas' conjecture always holds true. The question of deciding the realizability of Guibas' conjecture is characterized in terms of a linear programming problem. This leads to an algorithm for deciding and constructing such realizations that incorporates a linear programming step with O(n) inequality constraints and n variables.