Inferring Human Activity in Mobile Devices by Computing Multiple Contexts

Ruizhi Chen, Tianxing Chu, Keqiang Liu, Jingbin Liu, Yuwei Chen
Sensors, 2015
This paper introduces a framework for inferring human activities in mobile devices by computing spatial contexts, temporal contexts, spatiotemporal contexts, and user contexts. A spatial context is a significant location defined as a geofence, which can be a node associated with a circle, or a polygon; a temporal context contains time-related information, e.g., a local time tag, a time difference between geographical locations, or a timespan; a spatiotemporal context is defined as a dwelling length at a particular spatial context; and a user context includes user-related information such as the user's mobility contexts, environmental contexts, psychological contexts or social contexts. Using the measurements of the built-in sensors and radio signals in mobile devices, we can snapshot a contextual tuple every second comprising the aforementioned contexts. Given a contextual tuple, the framework evaluates the posterior probability of each candidate activity in real time using a Naïve Bayes classifier. A large dataset containing 710,436 contextual tuples was recorded over one week in an experiment carried out at Texas A&M University Corpus Christi with three participants. The test results demonstrate that the multi-context solution significantly outperforms the spatial-context-only solution: a classification accuracy of 61.7% is achieved for the spatial-context-only solution, while 88.8% is achieved for the multi-context solution.

Introduction

Can a smartphone "think"? This is an interesting research question. Location-aware and context-aware applications now appeal to mobile users, the mobile industry and scientific communities. Google Now is one of the smart applications publicly available today with location-aware features. It calculates and pushes relevant information automatically to the mobile user based on his/her current location [1]. The user location is the trigger of the location-aware function. Context-aware applications are more complicated than location-aware applications because we need to look at the whos, wheres, whens, and whats (what is the user doing) to understand why the context is taking place [2]. In many cases, location-aware computing is not sufficient to understand concurrent activities occurring at the same location. More contexts are required in addition to the spatial context (location information). For example, a waitress may work in a coffee shop while a customer takes a coffee break in the same place at the same time. We are unable to distinguish the waitress's activity of "working" from the customer's activity of "taking a coffee break" based on the location (where) and the time (when) alone. However, the user mobility contexts of the two are distinct: the waitress walks around the coffee shop to serve different customers, while the customer stays mostly static at a coffee table to enjoy his/her coffee break. Furthermore, the spatiotemporal contexts differ as well: the waitress dwells much longer in the coffee shop than the customer. In general, a spatial-context-only approach works well when only one activity is associated with a significant location. However, if multiple activities may take place at the same location, a multi-context approach is needed in order to identify the actual activity unambiguously. This example explains our motivation to introduce a multi-context approach for inferring human activities.

Recognition of human activities plays a crucial role in smart context-aware applications. It helps the computer understand what the user is doing under a particular circumstance. Although human activity recognition is a computationally demanding task, the rapid development of mobile computing capability in the last few years allows us to achieve this goal effectively in real time.
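To make the classification step described above concrete, the following is a minimal sketch of how a per-tuple posterior can be evaluated with a Naïve Bayes classifier over discretized context features. It is an illustration under our own assumptions, not the paper's implementation: the feature names, activity labels and all probability values are invented for the coffee-shop example.

```python
# Minimal Naive Bayes activity inference over one contextual tuple:
# P(activity | tuple) is proportional to P(activity) * product over
# features f of P(tuple[f] | activity). All names/numbers are assumptions.

class NaiveBayesActivityClassifier:
    def __init__(self, priors, likelihoods):
        self.priors = priors            # {activity: P(activity)}
        self.likelihoods = likelihoods  # {activity: {feature: {value: P(value|activity)}}}

    def posterior(self, contextual_tuple):
        """Return normalized posteriors for a tuple of discrete context values."""
        scores = {}
        for activity, prior in self.priors.items():
            score = prior
            for feature, value in contextual_tuple.items():
                table = self.likelihoods[activity].get(feature, {})
                # A small floor keeps one unseen value from zeroing the product.
                score *= table.get(value, 1e-6)
            scores[activity] = score
        total = sum(scores.values()) or 1.0
        return {activity: score / total for activity, score in scores.items()}

# The coffee-shop example from the introduction: mobility and dwelling length
# separate "working" (waitress) from "taking a coffee break" (customer).
clf = NaiveBayesActivityClassifier(
    priors={"working": 0.5, "coffee_break": 0.5},
    likelihoods={
        "working": {
            "mobility": {"walking": 0.8, "static": 0.2},
            "dwell": {"long": 0.9, "short": 0.1},
        },
        "coffee_break": {
            "mobility": {"walking": 0.2, "static": 0.8},
            "dwell": {"long": 0.2, "short": 0.8},
        },
    },
)
print(clf.posterior({"spatial": "coffee_shop", "mobility": "static", "dwell": "short"}))
# -> "coffee_break" dominates, even though the spatial context is identical
```

Because each context factor enters the product independently, adding a new context type (e.g., a temporal context) only requires one more conditional probability table per activity, which is what makes the per-second, real-time evaluation tractable on a phone.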
Nowadays, the most common approaches for human activity recognition are based either on external sensing systems, such as cameras in smart home environments, or on mobile sensing systems, such as smartphones and wearable devices [3]. This paper introduces a framework for inferring human activities using smartphones. It is based on real-time computation of a series of contextual tuples. Each contextual tuple consists of: (1) A spatial context, which is a geofence that can be a node associated with a circle, or a polygon (see the membership-test sketch after this list); (2) A temporal context, which can be the local time, a time difference between two geographical locations, or a timespan; (3) A spatiotemporal context, which is the dwelling length at a particular spatial context; and (4) A user context, which includes user-related information such as the user's mobility, environmental, psychological or social contexts.
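The geofence membership test referenced in item (1) can be sketched as follows. This is our own minimal illustration, not code from the paper: circular fences are tested with a haversine distance against the node's radius, and polygonal fences with a standard ray-casting test, which treats small fences as locally planar.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two WGS-84 points.
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def in_circle_geofence(lat, lon, node_lat, node_lon, radius_m):
    # "Node associated with a circle": inside if within the node's radius.
    return haversine_m(lat, lon, node_lat, node_lon) <= radius_m

def in_polygon_geofence(lat, lon, vertices):
    # Ray-casting point-in-polygon test; vertices is a list of (lat, lon)
    # pairs. Adequate for fences small enough that curvature is negligible.
    inside = False
    n = len(vertices)
    for i in range(n):
        lat1, lon1 = vertices[i]
        lat2, lon2 = vertices[(i + 1) % n]
        if (lon1 > lon) != (lon2 > lon):
            t = (lon - lon1) / (lon2 - lon1)
            if lat < lat1 + t * (lat2 - lat1):
                inside = not inside
    return inside

# Hypothetical usage: a 100 m circular fence around an illustrative point
# near the Corpus Christi campus (coordinates invented for the example).
print(in_circle_geofence(27.7126, -97.3250, 27.7125, -97.3248, 100.0))
```

With such a test evaluated once per second, the spatiotemporal context (dwelling length) follows directly by accumulating the consecutive seconds for which the same geofence reports membership.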
doi:10.3390/s150921219 pmid:26343665 pmcid:PMC4610464