Intelligent Adaptive Automation: Activity-Driven
and User-Centered Building Automation
by
Simin Ahmadi Karvigh
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(CIVIL ENGINEERING)
December, 2018
Executive summary
Buildings account for more than 40% of total energy consumption and 70% of the total electricity
consumption in the United States [1]. Among the energy consuming service systems in buildings,
lighting systems and appliances (including computers, office equipment, televisions, other
electronic devices, clothes washers, dryers, dishwashers, and cooking appliances) together
contribute to more than half of the electricity consumption in residential and commercial buildings
(i.e., 55% in residential buildings [2] and 51% in commercial buildings [3]). In addition,
Miscellaneous Electric Loads (MELs), including both plug loads and hard-wired loads, are
projected to increase from 6.1 to 6.9 Quads (13% growth) in residential buildings, and from 6.5 to
8.3 Quads (27% growth) in commercial buildings between 2016 and 2030 [1]. While non-MELs
building loads are projected to decrease, the share of building energy consumption associated with
MELs is projected to increase significantly between 2016 and 2030.
The significant contribution of lighting systems and appliances to the total electricity consumption
of buildings has ignited growing worldwide interest in strategies to improve the energy efficiency
of these service systems. Overall, these strategies follow two major approaches: (1) encouraging
occupants to change their wasteful behavior by making them aware of their energy consumption
behavior and potential energy savings; and (2) controlling the operation of service systems to be
more energy efficient using automation.
A growing body of work supports the idea that awareness of personalized and detailed energy
consumption could help occupants reduce their consumption [4]. In response, various
techniques have been successfully used to measure electricity consumption down to the device
(i.e., lighting fixture and appliance) level [5–7]. With the aid of disaggregated electricity
consumption data, occupants are able to distinguish inefficient devices and hence discover possible
savings that could be achieved by substituting inefficient devices with more efficient ones.
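As a rough illustration of how device-level disaggregated readings can surface candidate inefficient devices, the sketch below tallies per-device consumption shares and flags the largest consumers; the device names and the 20% share threshold are illustrative assumptions, not values from this thesis.

```python
# Summarize disaggregated (device-level) readings into consumption shares
# and flag devices above an assumed 20% share for closer inspection.
from collections import defaultdict

def consumption_shares(readings):
    """readings: iterable of (device, kwh) tuples -> {device: share of total}."""
    totals = defaultdict(float)
    for device, kwh in readings:
        totals[device] += kwh
    grand_total = sum(totals.values())
    return {d: kwh / grand_total for d, kwh in totals.items()}

# Fabricated sample readings for illustration only.
readings = [("fridge", 1.2), ("lighting", 0.8), ("tv", 0.5),
            ("fridge", 1.3), ("dryer", 3.1)]
shares = consumption_shares(readings)
flagged = sorted(d for d, s in shares.items() if s > 0.20)
```

In practice the interesting comparison is against an efficient reference appliance of the same type, but even raw shares like these point occupants toward where replacement savings are largest.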
However, deeper investigations of energy consumption in buildings have revealed that efficiency
of energy consuming devices is not the only factor involved. In fact, as shown by several studies,
occupant behavior in operating these service systems also has significant impacts on buildings’
energy consumption and hence on building controls [8]. Although occupants’ awareness of their
energy consuming behaviors could result in a remarkable increase in buildings’ energy efficiency,
the savings depend on the conscious actions and behavior change of occupants, which is not always
aligned with occupants’ convenience. Thus, researchers have also focused on approaches to
automate the operation of service systems in buildings to be more energy efficient without
requiring a behavior change in occupants [9].
In the context of improving energy efficiency of lighting systems and appliances, occupant
behavior can be defined as the ways the service systems are used by occupants during their
activities. Accordingly, recognizing activities could be an avenue for obtaining the required insight
into occupant behavior. Activity recognition has long been used in healthcare; however, its application
in building energy management has not been fully explored yet. With activity recognition
techniques, activities are detected either online, using real time data, or offline, using historical
data [10]. While offline activity recognition provides knowledge of occupants’ behavior for energy
consumption feedback, using activity recognition to develop a user-centered automation system in
buildings requires that activities be recognized in real time, so that the appropriate automation
action can be executed based on an occupant’s current activities. By
enhancing activity recognition with specific contextual information, it is possible to detect an
occupant’s wasteful behavior. Insight into wasteful behavior is not only needed for generating
automation rules; it could also help occupants make more informed decisions with respect
to the potential savings.
In a complex system (e.g., a building) with various functions to automate, adoptability is enhanced
when the automation system can offer automation at various levels matched to those functions.
Rather than applying the same level of automation to all functions in the system, each function
should be automated at an appropriate level; this is called adjustable autonomy [11]. Along this
line, the automation design procedure should involve determining the level of automation needed
for each function in the system. Human response to automation is affected by the function’s
workload, which is related to the function’s context, and by individual characteristics, which
introduce differences due to variations in interpersonal trust. Together, these factors shape users’
preferences for automation level, based on which the appropriate level of automation for different
functions can be determined. With the aim of incorporating automation preferences into the design
process, notable investigations of automation preferences have been conducted in several
domains, such as aircraft control [12–15]. However, there still exists a
gap in understanding how automation preferences vary among individuals in the context of
building energy efficiency, which requires further investigation.
As studies have shown, dynamicity is an essential property that needs to be considered to achieve
energy efficiency in buildings [4]. In other words, static assumptions should be avoided in
controlling the operation of service systems in buildings. Accordingly, in addition to the context
and subjective characteristics, time should be considered as a dimension by which automation
preferences could change. In response to the need for dynamicity in automation, researchers have
proposed the concept of progressive autonomy, in which the level of automation changes over time
as the occupant’s trust in automation increases [16]. An automation system
that is designed based on both adjustable and progressive autonomy could be a more powerful
approach to improve user convenience while increasing a building’s energy efficiency.
By emphasizing preferences, such an automation system can fully or partially control
appliances and lighting systems in buildings based on a set of dynamic rules generated from the
insight into occupants’ wasteful behavior obtained through activity recognition. Incorporating
activities and automation preferences as individual characteristics into the design process (i.e.,
activity-driven and user-centered automation design) could eliminate the drawbacks of the “one
size fits all” design paradigm, which is currently the dominant paradigm for appliance and lighting
system control.
The above-mentioned challenges can be summarized into the following research objectives of this
thesis: (1) to achieve insights into the effects of occupants’ activities on the electricity consumption
of appliances and lighting systems in buildings; and (2) to reduce appliance- and lighting-related
electricity consumption in buildings by providing an activity-driven and user-centered automation
system that adapts to changes in occupants’ automation preferences.
This thesis begins by introducing a novel framework to allocate personalized, appliance-level
disaggregated electricity consumption to daily activities using offline activity recognition
(Chapter 6). The presented framework consists of three main parts: context-aware data separation,
segmentation and activity recognition, and electricity consumption estimation. In order to separate
overlapping activities, we introduced an ontology-based approach in which the input
data (i.e., electricity usage) is separated into categories with regard to the context information.
Then the separated datasets are segmented into active and inactive segments. Next, the active
segments are mapped into activities. Finally, the associated electricity consumption of detected
activities is estimated. In order to evaluate our framework, an experimental validation in three
single-occupancy apartment units was carried out. The experimental results showed a total
F-measure of 0.97 for segmentation and an average accuracy of 93.41% for activity recognition.
Following the detection of activities, the approximate electricity consumption associated with each
activity was estimated. The differences in appliances’ contributions to the electricity consumption
of the investigated activities provided insights into different behaviors in performing daily
activities in the tested units.
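The segmentation step described above can be illustrated with a minimal gap-based sketch: device events that occur close together in time form an active segment, while long gaps mark inactive periods. The 30-minute gap threshold and the event format are illustrative assumptions, not the thesis’s actual parameters.

```python
# Group a sorted stream of device-event timestamps into active segments,
# closing a segment whenever the gap between events exceeds a threshold.
GAP_MINUTES = 30

def segment_events(event_times):
    """event_times: sorted minutes-since-midnight -> list of (start, end)."""
    if not event_times:
        return []
    segments = []
    start = prev = event_times[0]
    for t in event_times[1:]:
        if t - prev > GAP_MINUTES:        # gap too large: close current segment
            segments.append((start, prev))
            start = t
        prev = t
    segments.append((start, prev))
    return segments

# Events at 8:00-8:20 and around 18:30 form two active segments;
# everything between them is an inactive period.
times = [480, 485, 500, 1110, 1115]
print(segment_events(times))  # [(480, 500), (1110, 1115)]
```

Each active segment would then be mapped to an activity, and the consumption of the events inside it summed to estimate that activity’s electricity use.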
In Chapter 7, we continue our exploration of activity recognition by introducing an unsupervised
framework for real-time activity recognition and detection of wasted electricity cost and energy
consumption, using a combination of inductive and deductive reasoning to eliminate the need for
labeled activity data for training while achieving high performance. Our proposed
framework consists of three sub-algorithms: action detection, activity recognition and waste
estimation. In our framework, actions are the particular changes that are executed either by
occupants or by artifacts in the environment. Combinations of different actions create the activities.
As real-time input, the action detection algorithm receives data from the sensing system and
detects the occurring actions using unsupervised machine learning algorithms (i.e., Expectation
Maximization and Principal Component Analysis). Detected actions are then
used by the activity recognition algorithm to recognize the activities through semantic reasoning
on our constructed ontology, which contains activities along with additional contextual
information modeled as concepts and their relations. For a given appliance, based on the
recognized activities and waste estimation policies that are applicable, the waste estimation
algorithm determines whether the current consumption of the appliance is considered waste and
accordingly estimates the potential savings. To evaluate the performance of our framework, three
experiments were carried out, during two weeks, in a testbed office with five occupants and two
single-occupancy apartments, in which the performance of the action detection and activity
recognition was evaluated using ground-truth labels for actions and activities. Results showed an
average accuracy of 96.8% for action detection and 97.6% for activity recognition. In addition, the
results from the waste estimation showed that an average of 35.5% of the consumption of an
appliance or lighting system in the testbeds could be potentially reduced.
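As a simplified illustration of the Expectation Maximization component of the action detection step, the sketch below clusters step changes (deltas) in a power signal into “on” and “off” actions with a tiny one-dimensional two-component EM loop. This is a hedged stand-in for the thesis pipeline, which also applies Principal Component Analysis; all numbers here are fabricated for illustration.

```python
# Cluster power-signal deltas into two groups with a 1-D Gaussian-mixture
# EM loop: large positive jumps ~ "device switched on", large negative
# jumps ~ "switched off".
import math

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM; return the means."""
    mu = [min(xs), max(xs)]           # crude but effective initialization
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]                    # mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-((x - mu[k]) ** 2) / (2 * sigma[k] ** 2))
                 / (sigma[k] * math.sqrt(2 * math.pi)) for k in range(2)]
            s = sum(p) or 1e-12
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-12
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk) or 1e-3
    return mu

# Fabricated power deltas (watts) from an appliance toggling on and off.
deltas = [120.0, 118.0, 125.0, -119.0, -122.0, -117.0]
mu_off, mu_on = sorted(em_two_gaussians(deltas))
```

New deltas would then be labeled as "on" or "off" actions by their proximity to the two learned means, without any hand-labeled training data.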
In order to address part of our second objective, in Chapter 8, we present our investigation of
occupants’ automation preferences in different contexts for lighting system and appliance control
in buildings. The contexts we focused on in our study include rescheduling an energy-consuming
activity (e.g., using a dishwasher), management of different appliance states with regard to
occupant activities (e.g., working with a computer), and occupancy-based control of lighting
systems. For each context, we defined four levels of automation: full automation,
inquisitive automation (asks user approval before taking an action), adaptive automation (learns
user’s pattern and acts accordingly) and no automation. We carried out a survey of 250 respondents
using Amazon Mechanical Turk to determine how automation preferences vary with personality
and demographic characteristics, including the Big Five personality traits [17] (i.e.,
extraversion, agreeableness, conscientiousness, neuroticism and openness to experience), as well
as age, gender, marital status, education level and income level, in residential buildings. We
analyzed the collected data using descriptive and inferential statistical techniques and built a
Generalized Linear Mixed Model (GLMM). Our findings indicate that automation
preferences vary by context: for rescheduling an activity, inquisitive automation is the most likely
preferred option; for managing appliance standby power, adaptive automation has the highest
probability of being preferred; and for turning off appliances and lights left on unnecessarily, the
preference is full automation. Among the demographic variables, the effects of education level and
income level on automation preference are marginally significant and significant, respectively.
Among the personality traits, the effects of agreeableness, neuroticism and openness to experience
are significant. Finally, our investigation shows that in all contexts, no automation is the least
preferred option.
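The descriptive part of such an analysis can be sketched as a simple cross-tabulation of preferred automation level by context. The response records below are fabricated for illustration, and the GLMM modeling itself is not reproduced here.

```python
# Cross-tabulate survey responses: for each context, count how many
# respondents preferred each of the four automation levels.
from collections import Counter, defaultdict

LEVELS = ["full", "inquisitive", "adaptive", "none"]

def preference_table(responses):
    """responses: iterable of (context, level) -> {context: {level: count}}."""
    table = defaultdict(Counter)
    for context, level in responses:
        table[context][level] += 1
    return {c: {lvl: cnt[lvl] for lvl in LEVELS} for c, cnt in table.items()}

# Fabricated records mirroring the three contexts studied in Chapter 8.
responses = [
    ("reschedule", "inquisitive"), ("reschedule", "inquisitive"),
    ("reschedule", "full"),
    ("standby", "adaptive"), ("standby", "adaptive"), ("standby", "none"),
    ("left_on", "full"), ("left_on", "full"), ("left_on", "adaptive"),
]
table = preference_table(responses)
# Most-preferred level per context (ties broken by LEVELS order).
top = {c: max(LEVELS, key=lambda l: counts[l]) for c, counts in table.items()}
```

The inferential step would then test whether these per-context differences, and the demographic and personality effects, are statistically significant — in the thesis, via a Generalized Linear Mixed Model.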
With regards to the remaining parts of our second objective, in Chapter 9, we introduce an activity-
driven and user-centered building automation approach to improve the energy efficiency of
appliances and lighting systems in buildings considering occupants’ preferences and their
dynamics. Our proposed automation fully or partially controls the service systems in buildings
based on a set of dynamic rules that are generated with the insight about user’s preferences and
activities. The algorithmic components of our proposed automation include (1) dynamic command
planning, (2) adaptive local learning, and (3) iterative global learning. In order to evaluate these
algorithms, we used a combination of real and synthetic user activity and preference data from
an office with five occupants and an apartment with one occupant. Based on our results from
evaluation of adaptive local learning, after a certain number of days (i.e., 8.5 days on average) the
accuracy of predicting participants’ preferences reached an acceptable value (i.e., above 85%).
About 24% to 75%, 5% to 45%, and 6% to 49% of the participants’ total daily energy consumption
could be saved using full automation, adaptive automation and inquisitive automation,
respectively. Our results from evaluating the iterative global learning algorithm indicated that
adaptive automation has the highest sum of rewards from achieved benefit and user satisfaction,
with inquisitive automation second. Full automation and no automation ranked third and last,
respectively.
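The reward-driven selection of an automation level per state, as in the iterative global learning component, can be sketched with a standard tabular Q-learning update. The states, reward values and hyperparameters below are illustrative assumptions rather than the thesis’s actual formulation.

```python
# Tabular Q-learning over automation levels: rewards stand in for the
# combined energy-benefit and user-satisfaction signal described above.
ACTIONS = ["full", "inquisitive", "adaptive", "none"]
ALPHA, GAMMA = 0.5, 0.9   # learning rate and discount factor (assumed)

def q_update(Q, state, action, reward, next_state):
    """Standard Q-learning update: Q <- Q + alpha*(r + gamma*max Q' - Q)."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# One illustrative state; the made-up rewards say adaptive automation was
# well received while full automation was less preferred.
Q = {"evening": {a: 0.0 for a in ACTIONS}}
for _ in range(50):
    q_update(Q, "evening", "adaptive", 1.0, "evening")
    q_update(Q, "evening", "full", 0.3, "evening")
best = max(Q["evening"], key=Q["evening"].get)  # -> "adaptive"
```

Over repeated training sessions the Q values converge, and the level with the highest value in each state becomes the automation policy for that state, mirroring the ranking reported above.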
The structure of this thesis is as follows: Chapter 1 provides a detailed description of the problem
definition and motivation for the research effort. In Chapter 2, the research domains related to this
topic are discussed and the scope within which the research effort is constructed is specified. Chapter 3
presents some definitions regarding activity recognition and automation. Chapter 4 provides a
detailed literature review on the thesis scope and research gaps studied in the thesis. Research
objectives and the research questions are presented in Chapter 5. Chapters 6, 7, 8 and 9 present
research methodology, results, and conclusions that address the research questions.
List of Figures
Figure 1-1 Two sample behaviors in performing the activity of preparing breakfast. ................................ 14
Figure 6-1 Components of the framework ................................................................................................... 28
Figure 6-2 Example of events generated by different appliances on real power signals............................. 30
Figure 6-3 Schematic view of the ontology with its main classes and properties ....................................... 31
Figure 6-4 Flowchart of the algorithm to identify whether two activities can be overlapped or not. ......... 34
Figure 6-5 Schematic view of the ontology in the example scenario .......................................................... 38
Figure 6-6 Example of event segmentation ................................................................................................. 40
Figure 6-7 Selected starting point of days for different test beds. ............................................................... 45
Figure 6-8 Contribution of appliances in electricity consumption of daily activities for three units .......... 49
Figure 7-1. Overview of the framework. ..................................................................................................... 53
Figure 7-2 Schematic view of the ontology of activities with its main classes and properties. .................. 58
Figure 7-3 Activity Recognition (AR) algorithm. ....................................................................................... 59
Figure 7-4 Waste Estimation (WE) algorithm. ............................................................................................ 61
Figure 7-5 Office occupants’ cumulative average daily electricity consumption and wasted energy
consumption in watt-hours. .................................................................................................................. 73
Figure 7-6 Residential occupants’ cumulative values of average daily energy consumption, wasted energy
consumption and peak-hour usage that could be shifted to non-peak hours, in watt-hours. ................ 75
Figure 8-1 Comparison of different automation levels in terms of amount of automation and user
participation. ......................................................................................................................................... 82
Figure 8-2 Distribution of respondents in each group of preferred automation type for different contexts.
.............................................................................................................................................................. 87
Figure 8-3 Cumulative distribution of respondents in each group of preferred automation type. .............. 88
Figure 9-1. Example of goal, compound and primitive tasks for command planning. ............................. 100
Figure 9-2. All possible states and actions ................................................................................................ 105
Figure 9-3. Schematic layouts of the testbeds ........................................................................................... 111
Figure 9-4. A sample time series in ground truth format and its equivalent time series for simulation.... 112
Figure 9-5. Test accuracy of the models for each participant through the days of training using stochastic
gradient descent (SGD) with log loss (logistic regression) and hinge loss (SVM classifier) and stochastic
gradient boosting. ............................................................................................................................... 117
Figure 9-6. Example variation of daily policy reward for different states (Occupant B) .......................... 120
Figure 9-7. Normalized converged Q matrices for each participant after 4 training sessions ................... 121
Figure 9-8. The average daily energy consumptions and average daily energy savings that could be achieved
for each participant via adaptive automation...................................................................................... 123
List of Tables
Table 6-1 List of subclasses in created ontology for the example scenario ................................................ 36
Table 6-2 List of events associated with activity-related appliances........................................................... 43
Table 6-3 Test beds characteristics and appliance categories for data separation ....................................... 44
Table 6-4 Performance measurement of data segmentation. ....................................................................... 45
Table 6-5 Average results of the 10-fold cross validation on different classifiers. ..................................... 46
Table 6-6 Confusion matrix of the Random Forest classifier. ..................................................... 46
Table 6-7 Activities based on their average daily total electricity consumption .............................. 49
Table 7-1 Performance measurement of the action detection algorithm. .................................................... 66
Table 7-2 Performance measurement of activity recognition algorithm. .................................................... 68
Table 7-3 Artifacts’ average daily total energy consumption and percentage of waste in office testbed per
each activity. ......................................................................................................................................... 71
Table 7-4 Artifacts’ average daily total energy consumption and percentage of waste in residential testbeds
per each activity. ................................................................................................................................... 72
Table 8-1 Characteristics of the variables used in this study. ...................................................................... 84
Table 8-2 Results of type III F-tests and computed AICs for built GLMMs during model selection......... 89
Table 8-3 Estimated GLMM coefficients and their associated t-values and p-values. ............................... 90
Table 8-4 Estimated GLMM coefficients and their associated t-values and p-values. ............................... 91
Table 9-1. Details of the achieved activity datasets for two-month-long periods ..................................... 113
Table 9-2. Datasets used for evaluating our algorithms ............................................................................ 114
Table 9-3. Sample output from planning algorithm .................................................................................. 115
Contents
Chapter 1. Problem Definition/Motivation ............................................................................... 12
Chapter 2. Scope ....................................................................................................................... 18
Chapter 3. Definitions............................................................................................................... 20
Chapter 4. Literature Review and Research Gaps .................................................................... 21
4.1. Building Automation ................................................................................................................... 21
4.2. Activity recognition ..................................................................................................................... 22
4.3. Automation preferences ............................................................................................................... 24
Chapter 5. Objectives and Research Questions ........................................................................ 27
Chapter 6. A Framework to Allocate Disaggregated Electricity Consumption to Daily
Activities, using Offline Activity Recognition ............................................................................. 28
6.1. Methodology ................................................................................................................................ 28
Input to framework ............................................................................................................................ 29
Context-aware data separation ........................................................................................................... 31
Segmentation and activity recognition .............................................................................................. 38
Electricity consumption estimation ................................................................................................... 41
6.2. Experimental study ...................................................................................................................... 42
6.3. Results.......................................................................................................................................... 43
6.4. Discussion .................................................................................................................................... 48
6.5. Conclusion ................................................................................................................................... 51
Chapter 7. A Framework for Real Time Activity Recognition and Wasteful Behavior
Detection…….. ............................................................................................................................. 53
7.1. Methodology ................................................................................................................................ 53
Action Detection ................................................................................................................................ 54
Activity Recognition .......................................................................................................................... 56
Waste Estimation ............................................................................................................................... 60
7.2. Experimental Study ..................................................................................................................... 63
7.3. Results.......................................................................................................................................... 63
7.4. Discussion .................................................................................................................................... 69
Office testbed: .......................................................................................................................................... 69
Apartment testbeds: ................................................................................................................................. 76
7.5. Conclusion ................................................................................................................................... 77
Chapter 8. Contextual and Subjective Factors Affecting Automation Preferences .................. 79
8.1. Methodology ................................................................................................................................ 79
8.2. Survey Study ................................................................................................................................ 79
Questionnaire Design......................................................................................................................... 79
Data Collection .................................................................................................................................. 83
Statistical Analysis............................................................................................................................. 83
8.3. Results.......................................................................................................................................... 86
8.4. Discussion .................................................................................................................................... 94
8.5. Conclusion ................................................................................................................................... 96
Chapter 9. Activity-driven and User-centered Automation of Appliances and Lighting
Systems in Buildings..................................................................................................................... 98
9.1. Methodology ................................................................................................................................ 98
Dynamic Command Planning ............................................................................................................ 99
Learning Preferences for Different Contexts and Conditions ......................................................... 100
Learning the Changes in Preferences in Time ................................................................................. 104
9.2. Evaluation of Framework and Results ....................................................................................... 110
Data Used for Evaluation................................................................................................................. 110
Evaluation of Dynamic Command Planning ................................................................................... 114
Evaluation of Adaptive Local Learning .......................................................................................... 116
Evaluation of Iterative Global Learning .......................................................................................... 118
9.3. Discussion .................................................................................................................................. 121
9.4. Conclusion ................................................................................................................................. 124
Chapter 10. Limitations and Future work ............................................................................. 126
Chapter 11. Conclusions ....................................................................................................... 128
Acknowledgments....................................................................................................................... 131
Publications ................................................................................................................................. 132
References ................................................................................................................................... 133
Chapter 1. Problem Definition/Motivation
The United States is the second largest electricity consumer in the world [1]. In 2014, U.S.
electricity use totaled nearly 3,862 billion kilowatt-hours, a figure projected to increase to 4,867
billion kilowatt-hours by 2040. More than 70% of U.S. electricity is consumed by residential and
commercial buildings, with each sector using roughly the same amount of electricity. Among the
energy consuming service systems in buildings, lighting systems and appliances (including
computers, office equipment, televisions, other electronic devices, clothes washers, dryers,
dishwashers, and cooking appliances) together contribute to more than half of the electricity
consumption in residential and commercial buildings (i.e., 55% in residential buildings [2] and
51% in commercial buildings [3]). In addition, Miscellaneous Electric Loads (MELs), including
both plug loads and hard-wired loads, are projected to increase from 6.1 to 6.9 Quads (13% growth)
in residential buildings, and from 6.5 to 8.3 Quads (27% growth) in commercial buildings between
2016 and 2030 [1]. While non-MELs building loads are projected to decrease, the share of building
energy consumption associated with MELs is projected to increase significantly between 2016 and
2030. The significant contribution of lighting systems and appliances to the total electricity
consumption of buildings has ignited growing worldwide interest in finding strategies to improve
the energy efficiency of these service systems.
An increasing amount of work supports the idea that awareness of personalized and detailed energy
consumption could assist occupants in reducing their consumption [4]. In response to this, various
techniques have been successfully used to measure electricity consumption down to the device
(i.e., lighting fixture and appliance) level [5–7]. With the aid of disaggregated electricity
consumption data, occupants are able to distinguish inefficient devices and hence discover possible
savings that could be achieved by substituting inefficient devices with more efficient ones.
However, deeper investigations of energy consumption in buildings have revealed that efficiency
of energy consuming devices is not the only factor involved. In fact, as shown by several studies,
occupant behavior in operating these service systems also has significant impacts on buildings'
energy consumption and hence building controls [8]. Along this line, careless behavior has been found
to increase a building's electricity consumption by one-third, while conservation behavior can save
a third [18]. An interesting example is an experiment carried out in a real office building in which
employees were informed about their energy consumption behaviors and accordingly asked to turn
off the office appliances and lights during peak electricity use hours [19]. The results of this
experiment showed a saving of 26% in electricity consumption. Although occupants' awareness
of their energy consuming behaviors could result in a remarkable increase in buildings' energy
efficiency, the savings depend on the conscious actions and behavior change of the occupants,
which are not always aligned with occupants' convenience. Thus, researchers have also focused on
approaches to automate the operation of the service systems in buildings to be more energy
efficient without requiring a behavior change in occupants. This approach has shown promising
energy saving potentials in several studies that have investigated the application of
automation in buildings (e.g., [9]).
Strategies that use a combination of both stated approaches (i.e., (1) encouraging occupants to
change their wasteful behavior by making them aware of their energy consumption behavior and
potential energy savings; and (2) controlling the operation of service systems to be more energy
efficient using automation) could be more influential as they would benefit from the potentials of
both. Although these approaches are different in nature, they both require a level of insight on
occupant’s behavior. In the context of improving the energy efficiency of lighting systems and
appliances, occupant’s behavior can be defined as the ways the service systems are used by
occupants during their activities. Accordingly, recognizing activities could be an avenue to obtain
the required insight into occupant behavior.
Activities are combinations of different actions performed by an occupant in order to satisfy a
specific need. These actions and their durations vary from case to case. Even for a specific case, they
could be inconsistent under different circumstances. For example, the activity of preparing
breakfast can be formed by a variety of actions in different ways. Figure 1-1 shows two possible
behaviors in performing this activity.
Figure 1-1. Two sample behaviors in performing the activity of preparing breakfast. Reverse
actions (e.g., turning on lights and turning off lights) are shown in the same color.
Variations in performing activities result in different energy consumption of activities across
individuals. For example, Fechner showed that chefs, using the same appliances to cook the
same meal, differed in electricity consumption by up to 50% [20]. In Figure 1-1, the slight
difference in the associated actions by which the activities are formed causes more electricity
consumption in the second behavior, as the fridge door is left open when the microwave is used
and the lights are also left on after leaving the kitchen. As illustrated in this example, by
recognizing and exploring activities, it is possible to develop an actual and personalized
appliance-level model of an occupant's behavior, based on which it is possible to detect potential energy
savings to give feedback to occupants or generate automation commands for more efficient
operation of the service systems.
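For illustration only, the wasteful states in the second behavior of Figure 1-1 could be flagged by simple contextual rules over sensed device states. The following is a minimal sketch, assuming hypothetical device names and hard-coded rules; the frameworks developed in this thesis derive such insight from recognized activities rather than fixed rules.

```python
# Illustrative sketch only: device names and the two waste rules are
# hypothetical, chosen to mirror the breakfast scenario in Figure 1-1.

def detect_waste(states: dict) -> list:
    """Return descriptions of wasteful device states in a sensed snapshot."""
    waste = []
    # Rule 1: lighting left on while the room is unoccupied
    if states.get("lights_on") and not states.get("kitchen_occupied"):
        waste.append("lights left on in unoccupied kitchen")
    # Rule 2: fridge door left open while another appliance is in use
    if states.get("fridge_door_open") and states.get("microwave_in_use"):
        waste.append("fridge door left open while using the microwave")
    return waste

snapshot = {"lights_on": True, "kitchen_occupied": False,
            "fridge_door_open": False, "microwave_in_use": False}
print(detect_waste(snapshot))  # ['lights left on in unoccupied kitchen']
```

Each flagged state could then be turned into feedback for the occupant or into a candidate automation command.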
Activity recognition has long been used in healthcare; however, its application in building energy
management has not been fully explored yet. With activity recognition techniques, activities are
detected either online, using real time data, or offline, using historical data. While offline activity
recognition provides the knowledge on occupants’ behavior to give energy consumption feedback,
for the sake of using activity recognition to develop a user-centered automation system in
buildings, activities must be recognized in real time, so that the appropriate automation action
could be executed based on occupant’s current activities. By enhancing activity recognition with
specific contextual information, it is possible to detect an occupant's wasteful behavior. The
insight into wasteful behavior is not only needed for generating automation rules, but could also
assist occupants in making more informed decisions with respect to the potential savings.
In complex systems with several interacting components, such as buildings, in order to efficiently
achieve an operational goal (e.g., improving building energy efficiency), a set of functions need to
be carried out. These functions can be categorized into two classes of input functions and output
functions [21]. Input functions refer to the functions for data acquisition and data analysis. Given
a building equipped with a sensing system, data acquisition functions are carried out to obtain
information about occupants and physical environments within a building (e.g., an occupant’s
location in the room, energy consumption of appliances and light intensity in the room) [7,22,23].
Relying on the obtained information, data analysis functions are performed to discover the knowledge
required by the output functions. Occupant activity and preference recognition, wasted
energy consumption detection and behavior prediction are examples of data analysis functions
[24–26]. Output functions include the functions for action selection (decision making) and action
implementation. Action selection functions involve selecting an action among the available
options. A building-related example is a case where the lights in a room are on and the room is not
occupied. In this situation, the decision making function is the selection between turning off
the lights and leaving them on. Consistent with these decisions, the functions in action
implementation are the actual execution of the chosen actions. Accordingly, for the previously
given example, if the choice of the action is to turn off the lights, the action implementation
function is to accomplish the action (i.e., turning off the lights).
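The four function classes described above can be sketched as a minimal pipeline; the sensor readings, analysis rule, and actuator stub below are hypothetical placeholders, not the systems developed in this thesis.

```python
# Minimal sketch of the two input and two output function classes,
# with hypothetical stand-ins for sensing, analysis, and actuation.

def acquire():
    """Input function 1: data acquisition (stubbed sensor readings)."""
    return {"room_occupied": False, "lights_on": True}

def analyze(data):
    """Input function 2: data analysis (detect wasted lighting)."""
    return {"wasted_lighting": data["lights_on"] and not data["room_occupied"]}

def select_action(knowledge):
    """Output function 1: action selection (decision making)."""
    return "turn_off_lights" if knowledge["wasted_lighting"] else "no_action"

def implement(action):
    """Output function 2: action implementation (stand-in for an actuator)."""
    return f"executed: {action}"

print(implement(select_action(analyze(acquire()))))  # executed: turn_off_lights
```

In a real building, any of these four stages could be assigned a different level of automation, which is the premise of the adjustable autonomy discussion that follows.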
Due to recent technological advancements, building automation systems are now capable of
performing many of these functions autonomously. However, research has shown that even if
technological capabilities allow, fully automating all of these functions does not always lead to the
most satisfactory operations [14,21]. An alternative solution is to harmonize the automation level
with the workload needed for the functions (i.e., amount of work required to perform the functions)
[13]. This implies that rather than having extreme cases of fully automated or fully manual
conditions only, depending on the function’s workload, various levels of automation, ranging from
zero to full automation, can be assigned to each function [27]. This is called adjustable autonomy
[11]. The workload of a function depends on the context in which the function is performed (e.g.,
the context of controlling the lighting system). Tailoring the level of automation to match the
workload offers the opportunity to increase user control over the system while avoiding
user inconvenience, by requesting more user participation only in contexts with lower workloads.
This would potentially lead to less error-prone operations and hence, could enhance automation
acceptability.
In addition to the workload, in order to determine the appropriate level of automation, subjective
factors that cause individual differences in response to automation should also be considered [28].
In general, human interactions with other human or nonhuman agents have been shown to be affected
by both general trust and context dependent trust, which are two dimensions of interpersonal trust
[29,30]. While general trust is part of one's personality and hence is affected by personality traits,
context dependent trust is related to prior similar experiences. As a result, in unfamiliar situations,
general trust exerts more impact on an individual's behavior, whereas in familiar situations,
context dependent trust is more influential. Accordingly, a combination of impacts from one's
personality traits and previous automation experiences affect a user’s response to automation.
Evidence in support of this contention is provided by studies demonstrating the effect of a user's
personality traits or demographic-related characteristics on the use of a technology (e.g., adoption of
technology in households [31], use of communication tools by students [32], and acceptance of
mobile commerce [33]).
Following the concept of adjustable autonomy, the procedure for automation design in a system
should involve determination of the level of automation needed for different functions. The context
in which a function is performed and an individual's characteristics together contribute to the
formation of automation level preferences, based on which the appropriate level of
automation needed for different functions could be determined. With the purpose of incorporating
automation preferences into the process of automation design, investigations on automation
preferences have been conducted in several domains, such as aircraft control [14]. However, there
still exists a gap in understanding the variation of automation preferences among individuals
in the application of building energy efficiency.
As studies have shown, dynamicity is an essential property that needs to be considered to achieve
energy efficiency in buildings [4]. In other words, static assumptions should be avoided in
controlling the operation of the service systems in buildings. Accordingly, in addition to the
context and subjective characteristics, time should also be considered as a dimension by which
automation preferences could change. In response to the need for dynamicity in automation,
researchers have proposed the concept of progressive autonomy, where the level of automation
can change over time as the occupant's trust in automation increases [16]. An
automation system that is designed based on both adjustable and progressive autonomy could be
a more powerful approach to improve user convenience while increasing a building’s energy
efficiency. Emphasizing on the preferences, such an automation system can fully or partially
control the appliances and lighting systems in the buildings based on a set of dynamic rules that
that are generated with regards to the insight from occupant’s wasteful behavior achieved through
activity recognition. Incorporating activities and automation preferences as individual
characteristics into the design process (i.e., activity-driven and user-centered automation design)
could eliminate the drawbacks of “one size fits all” design paradigm, which is currently the main
automation design paradigm for appliances and lighting systems control.
Chapter 2. Scope
This thesis aims at improving buildings' energy efficiency by providing occupants with activity-based
electricity consumption feedback and also an activity-driven automation system that controls
the energy consuming service systems in buildings, while maintaining user satisfaction.
research efforts in this thesis have been evaluated in real world multi-occupancy office test beds
and single-occupancy residential units. Accordingly, investigation of other types of buildings, such
as schools, hospitals, etc., and multi-occupancy residential units is out of the scope of this thesis.
Among the energy consuming service systems, this thesis focuses on lighting systems and
appliances (those that are used for applications other than space heating, air conditioning,
refrigeration and water heating), as these systems together contribute to a considerable portion of
the electricity consumption (i.e., more than half) in commercial and residential buildings [1]. It
should be pointed out here that investigation of the activities associated with shared appliances is
out of the scope of this thesis.
In line with the research goal of this thesis, our first research objective is to use activity recognition
techniques to achieve insights about the effects of occupant’s activities on electricity consumption
in buildings. Along this line, this thesis presents two novel frameworks for online and offline
activity recognition, to detect occupant’s activities and wasteful behavior, using the data captured
from devices and environment (e.g., appliances’ states, light intensity, and occupancy) by a sensing
system. The proposed frameworks were evaluated via experimental studies in real-world office
and residential test beds. It should be pointed out here that finding a trade-off between cost and
complexity of instrumentation and information gain associated with the sensing system is out of
the scope of this research effort. Since the thesis's focus is buildings' energy efficiency, only the
activities that are associated with use of the energy consuming devices in buildings, such as
working with a computer, are included in the scope of this research effort, and other types of
activities, such as physical activities, are out of the scope of this thesis.
The second objective of this thesis is to propose an activity-driven and user-centered automation
system that adapts to changes in occupant’s automation preferences and activities. To understand
user preferences for building automation systems, this thesis presents the results of our survey on
contextual and subjective factors that affect preference of automation level for appliances and
lighting systems control. The studied subjective factors include the big five personality traits (i.e.,
extraversion, agreeableness, conscientiousness, neuroticism and openness to experience) and
demographic-related characteristics (i.e., age, gender, marital status, education level and income). Studying the
influence of factors, such as location, culture, and ethnicity is not in the scope of this thesis.
Finally, this thesis presents a novel framework to adaptively generate activity-driven control
commands based on the changes in user preferences in different conditions over time. In particular,
this research effort focuses on reactive control and hence proactive control is out of the scope of
this thesis. The proposed framework was evaluated using a combination of synthetic and real
user activity and preference data from an office and an apartment. It should be pointed out here
that evaluating the performance of the hardware components of the proposed automation system
(e.g., hardware failure based on network disconnection, sensor failure rate and actuator failure
rate) and also evaluating the user interface for communicating with the automation system are out
of the scope of this thesis.
Chapter 3. Definitions
Activity: Combination of different actions performed by individuals in order to satisfy a specific
need [34]. In the context of building energy efficiency, cooking, leaving and entering a room,
working with a computer, and watching television are examples of occupants' activities.
Activity recognition: A formal procedure to learn and recognize activities using the data captured
by a sensing system from the states of individuals, devices and environment (e.g., occupancy,
blood pressure, body temperature, plug load, temperature, and light intensity) [34].
Automation: The Oxford English Dictionary defines automation as the use of electronic or
mechanical devices to replace human labor [35]. In the context of building’s energy efficiency,
automation refers to the full or partial replacement of a function previously carried out by an occupant
[21].
Control commands: Operations that can be executed on devices are called commands [36]. When
successful, commands cause devices to change their internal state or to actuate in their environment
of influence. For example, setting the intensity of a lamp is the result of executing the command
“dim to level”. Commands can be sent to devices individually or in groups if all devices of a group
can accept that command.
Automation levels: Automation levels specify the different degrees to which a function is automated
[27]. This implies that automation is not all or none, but can vary across a continuum of
intermediate levels, between fully manual performance and fully autonomous conditions at the
two extremes.
Adjustable autonomy: Adjustable autonomy describes the property of an autonomous system to
change its level of autonomy to one of many levels across contexts [37].
Progressive autonomy: Progressive autonomy describes the property of an autonomous system to
change its level of autonomy over time, while the system operates and the user's trust in automation
changes [16].
Chapter 4. Literature Review and Research Gaps
4.1. Building Automation
Building automation is defined as a system that performs functions (fully or partially) related to
controlling and monitoring the electrical and mechanical devices that are interconnected over
communication networks to achieve comfort, energy efficiency, health and well-being [4,36].
Building automation relies on the data captured by monitoring devices to gain insights into occupants
and the environment in order to perform automation-related tasks. These devices can be classified into two
categories: sensors to measure environmental and physiological parameters (e.g., [38–40]), and
multimedia devices to capture audiovisual information (e.g., [41]).
Different machine learning algorithms (e.g., Support Vector Machine [42], Hidden Markov Model
[43], Deep Neural Networks [44]) and semantic reasoning methods (e.g., [45,46]) have been used
by building automation systems to analyze the data captured by monitoring devices. Along this line,
some of the existing building automation approaches offer control strategies based on real-time
recognition of occupant presence (e.g., occupancy-based lighting control using fine-grained 3-D
ultrasonic tracking [47], WiFi-based occupancy driven lighting control using online sequential
extreme learning machine [48], real-time occupancy-based appliance control using multi-agent
automation systems [49]). These approaches are generally used to control devices with immediate
response time to state changes (e.g., lighting system control [47,48]). On the other hand, there are
building automation approaches that use predictive methods, in which control strategies are based
on predicted presence patterns (e.g., [50–52]). Compared to real-time approaches, predictive
methods have been shown to be more appropriate solutions for controlling space heating/cooling,
due to the long response time to state changes of temperature [4].
In more recent years, building automation approaches that offer activity-driven control have
received growing interest (a.k.a., context-aware building automation). Along this line, there are
studies that propose algorithms to make predictions and decisions to control appliances based on
a probabilistic or Markov model explaining user activities (e.g., [53–55]). Since activity-driven
building automation acquires a deeper insight into context, it can make more informed
automation decisions compared to building automation that makes its decisions solely based on
user presence information. This could in turn result in a more effective automation.
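As a rough sketch of the kind of probabilistic activity model used in such approaches, the snippet below estimates first-order Markov transition probabilities from a short, hypothetical activity log; the cited works [53–55] build considerably richer models.

```python
# Estimate next-activity probabilities from an observed activity sequence
# (a hypothetical log) using first-order Markov transition counts.
from collections import Counter, defaultdict

log = ["enter_kitchen", "cook", "eat", "leave_kitchen",
       "enter_kitchen", "cook", "leave_kitchen"]

counts = defaultdict(Counter)
for prev, nxt in zip(log, log[1:]):
    counts[prev][nxt] += 1

def next_activity_probs(activity):
    """P(next activity | current activity), from transition counts."""
    total = sum(counts[activity].values())
    return {a: c / total for a, c in counts[activity].items()}

print(next_activity_probs("cook"))  # {'eat': 0.5, 'leave_kitchen': 0.5}
```

A controller built on such a model could, for instance, anticipate the next activity and pre-select the corresponding control command.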
Another factor that plays a significant role in effectiveness of automation is the extent to which
the automation satisfies user preferences [4,56,57]. Accordingly, researchers have developed approaches
to design building automation that considers user preferences (e.g., preferences for light,
temperature, and level and type of automation) as influencing constraints in control (e.g., a building
automation system that offers HVAC control based on a user's thermal comfort, or one that controls
the lighting system based on a user's lighting comfort [58–61]).
As studies have shown, user preferences are subject to change in time or under different
circumstances [4,16,58]. With the purpose of designing building automation that can adjust
itself to the dynamics of user preferences, adaptive building automation has been developed
more recently. Some examples of adaptive building automation include a building automation that
is agent-based [62,63], a building automation that uses self-adaptive neural network [64], a
building automation that uses case-based reasoning and can adapt to any manual adjustment by
modifying case data [65], and a building automation that uses machine learning techniques to
discover user behavior patterns and automatically identify changes in behavior [66].
Largely, existing approaches for adaptive automation offer limited options for automation (levels
and types) to choose from (e.g., activity automated or not [66]), without any learning procedure to
autonomously detect changes in preferences (e.g., offering only manual reconfiguration [65]).
Thus, to facilitate rapid adoption, future systems should better consider user satisfaction by
offering activity-based automation at various levels. They should be capable of improving their
performance not only in terms of activity recognition but also in terms of user preferences for
different control strategies.
4.2. Activity recognition
Researchers have been exploring activity recognition for a variety of applications over the past
years. One of the domains, in which activity recognition has been effectively explored, is the
healthcare domain [67]. Studies along this line have investigated activity recognition techniques
to monitor daily activities to observe the progress of diseases in patients and recognize abnormal
behaviors resulting from emerging medical conditions [68]. In addition to the daily activities,
detection of physical movements has also been investigated, specifically for patient fall detection
in emergency situations [69].
Since activity recognition techniques depend highly on the application and hence the activities of
interest, the techniques that are appropriate for healthcare applications are not necessarily
suitable for applications in other domains (e.g., the energy efficiency domain). Accordingly,
researchers in the energy domain have explored the influence of occupant behavior on building
electricity consumption via analyzing historical time use and ownership data sets [70,71]. In these
studies, in order to create high-resolution electricity demand predictors, probabilistic models of
activities and their associated electricity consumptions were created. Since these types of studies
are based on large historical data sets, they do not reflect precise personalized consumption
patterns.
Relying on sensing techniques, more recent studies have utilized offline activity recognition to
improve energy consumption awareness via the creation of activity-based energy consumption
feedback [7,25,72]. In addition, there are a few studies that proposed automation approaches to
control appliances and lighting systems, using online activity recognition [73–77]. These studies
do not offer a formal procedure for wasteful behavior detection that could be potentially used for
generating activity-driven automation rules based on occupant preferences.
Overall, compared to the healthcare domain, fewer studies exist in the energy management
domain. Thus, in order to effectively explore the potentials and applications of activity recognition
for real-time energy management, further investigation is needed.
Regardless of the application domain, the techniques used by different activity recognition
approaches (either offline or online) could be classified into two main categories: data-driven
approaches and knowledge-driven approaches [78]. In the data-driven approaches, machine-
learning techniques and probabilistic approaches, such as Hidden Markov Models (HMMs) [42],
Support Vector Machine (SVM) classifiers [42], Bayesian Networks (BNs) [79], Naive Bayes
classifiers [80] and Decision Trees [81], have been adapted to detect the activities using inductive
reasoning. The majority of the existing data-driven approaches are supervised and they require
labeled data for training. There are also a few studies based on unsupervised techniques [82], which
generally suffer from lower performance compared to the supervised approaches.
In the knowledge-driven approaches, activities along with their contextual relationships are
modeled in an ontology and accordingly new instances are detected using deductive reasoning
[83]. The advantage of knowledge-driven approaches over data-driven approaches is that they do
not need training. Therefore, for detecting complex activities, where acquiring sufficient training
data to achieve acceptable detection accuracy is often significantly difficult and sometimes
impossible, knowledge-driven approaches outperform data-driven approaches. On the other hand,
for detecting simple and basic activities, data-driven approaches are better choices, as training
probabilistic models with high detection accuracy is more convenient than modeling these
activities using an ontology. Since in most applications both simple and complex activities exist,
using a combination of the approaches could result in a better performance.
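As a concrete, toy-scale example of the data-driven style, the sketch below trains a Naive Bayes classifier (one of the techniques cited above) on hypothetical binary device-state features; real data-driven approaches use far richer feature sets and training data.

```python
# Toy supervised Naive Bayes over binary sensor features; the labels and
# features are hypothetical, illustrating inductive reasoning only.
from collections import Counter, defaultdict

train = [  # (device-state features, activity label)
    ({"pc_on": 1, "stove_on": 0}, "working"),
    ({"pc_on": 1, "stove_on": 0}, "working"),
    ({"pc_on": 0, "stove_on": 1}, "cooking"),
    ({"pc_on": 0, "stove_on": 1}, "cooking"),
]

prior = Counter(label for _, label in train)
on_counts = defaultdict(Counter)  # label -> feature -> count of value 1
for feats, label in train:
    for f, v in feats.items():
        on_counts[label][f] += v

def classify(feats):
    """Return the most probable activity label for a feature snapshot."""
    def score(label):
        p = prior[label] / len(train)
        for f, v in feats.items():
            p_on = (on_counts[label][f] + 1) / (prior[label] + 2)  # Laplace
            p *= p_on if v else 1 - p_on
        return p
    return max(prior, key=score)

print(classify({"pc_on": 1, "stove_on": 0}))  # working
```

A knowledge-driven counterpart would instead encode such activities and their contextual relationships in an ontology and detect new instances deductively, which is why a hybrid of the two styles is attractive when both simple and complex activities are of interest.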
In order to address the stated gaps in the literature, in this thesis, we explore the application of activity
recognition to improve energy efficiency in buildings by introducing a framework for real-time
activity recognition and wasteful behavior detection via a hybrid application of inductive and
deductive reasoning.
4.3. Automation preferences
In recent years, technological advances have substantially extended the capabilities of automation
systems in buildings [84,85]. Despite the achieved advances, automation systems have not been
widely adopted by building occupants. Evidence of this problem is presented in a study carried
out in 40 offices over 5 months, where the results indicated that the majority of the occupants
were not satisfied with the automation system and hence stopped using it [86].
In order to understand the barriers, the authors of [87] carried out home visits to 14 residential
units that used automation, and found that inflexibility, poor manageability and difficulty in
achieving trust were the main reasons behind occupant dissatisfaction.
Other investigations, based on the experiences of families living in smart houses, have revealed
that occupant dissatisfaction could significantly decrease when more control over the execution of
the functions is offered to the users [88,89]. Accordingly, researchers have proposed adjustable
autonomy, where various levels of automation (ranging from zero to full) are provided in different
contexts, in order to increase user control over the automation system [28]. Along this line, the
authors of [90] suggested a novel approach to provide adjustable autonomy, where decision
making was occasionally transferred to the user. In another study, an adaptive automation system
that could gradually become autonomous by learning user preferences was proposed to control
appliances in a hospital room [91]. More recently, agent-based approaches were also used to
provide adjustable autonomy in buildings to increase energy efficiency [62,63,92].
Since in adjustable autonomy, automation level is altered to match the automation needed in
different contexts, the procedure of automation design should involve determination of user
automation preferences. In a study by Parasuraman et al., pilots performing a monitoring task in
a simulated flight were offered three options: automation matched with the workload,
automation poorly matched with the workload, and no automation [14]. Among the provided
options, workload-matched automation was the most preferred option and resulted in the best
monitoring performance. In another study carried out on terrorist threat detection in a simulated
environment, the effects of personality and task (equivalent to context in our study) on user
response to automation (in terms of user performance and stress level) were investigated [28].
Findings of this study indicated that the variable of task had a stronger influence on automation
response than the variable of personality did. The study also showed that the automation response to
a particular task was not always common across individuals, due to personality-related effects.
These effects were found to be mainly negative for neuroticism, as opposed to other personality traits, such
as extraversion and agreeableness, for which positive effects were generally reported.
There are also studies conducted on user automation preferences in buildings. Along this line, Ball
et al. carried out a survey to assess university students' automation preferences to control different
services in a smart home (i.e., controlling doors and windows, monitoring energy usage, automatic
indoor heating, automatic room lighting, and providing entertainment services such as choosing
movies) [93]. The survey results revealed the usefulness of adjustable autonomy, as participants’
automation preferences were found to be different across the services (equivalent to context in our
study).
In another study, the authors have presented the results of an empirical study conducted in six
different sites, where the participants were asked questions regarding the explained automation
scenarios of intelligent entertainment systems (e.g., playing and finding movies, music and games,
and adjusting light and sounds in a room for movie watching or game playing); and intelligent
home caring systems (e.g., controlling the doors, finding cooking recipes, and detecting faults in
operation of a washing machine) [94]. The study showed that users would prefer an automation
system that is easy to use and configurable to accommodate individual settings and preferences.
Among the studies on user preferences for automation in buildings, there are a number of studies
that carried out their investigations with a particular focus on energy efficiency. These studies mostly
explored user preferences on the type of feedback they wanted to receive from an automation
system regarding their energy consumption, e.g., aggregated electricity consumption vs.
disaggregated appliance-level electricity consumption [95–97], rather than automation type or
level.
There are a few studies with limited scope that have focused on user preferences for automation type
and level in the energy efficiency domain. For example, findings of a pilot study with 26 participants on
user satisfaction level associated with different control strategies of dynamic façades indicated that
increasing the user control over the façade automation via offering the opportunity to manually
override the commands led to significantly higher satisfaction levels [98].
In another study, the authors of [99] conducted a scenario-based study to investigate automation preferences with regard to three contexts, i.e., control of the indoor thermal environment, peak load management, and onsite energy production, using data from qualitative interviews with 14 participants. The study showed that full automation was not a preferable option for automating functions related to indoor thermal control. In addition, substantial mistrust of automation was reported, which could potentially be reduced by carefully choosing the automation level.
In summary, existing studies on user automation preferences in buildings do not cover the contexts associated with different automation strategies related to the energy efficiency of appliances and lighting systems. Moreover, none of these studies has investigated the effects of individuals' characteristics, in terms of both demographics and personality, on automation preferences in these contexts.
Chapter 5. Objectives and Research Questions
Overall objective: To improve energy efficiency of appliances and lighting systems in buildings
by providing occupants with activity-based electricity consumption feedback and also an activity-
driven automation that controls the operation of appliances and lighting systems, while
maintaining user satisfaction.
Objective 1: To achieve insights about the effects of occupant’s activities on energy consumption
of appliances and lighting systems in buildings.
➢ Research question 1: How to achieve insights about the effects of occupant’s activities
on energy consumption of appliances and lighting systems in buildings, in order to
generate activity-based energy consumption feedback? (Chapter 6)
➢ Research question 2: How to achieve insights about the effects of occupant’s activities
on energy consumption of appliances and lighting systems in buildings, in order to offer
an activity-driven appliance and lighting system automation in building? (Chapter 7)
Objective 2: To reduce appliances and lighting system related energy consumption in buildings by
providing an activity-driven and user-centered automation that adapts to changes in occupant’s
automation preferences and activities.
➢ Research question 1: How do occupant’s preferences for automation of appliances and
lighting system in building vary by contexts and occupant’s characteristics
(demographic-related and personality-related)? (Chapter 8)
➢ Research question 2: How to reason about occupant’s activities and automation
preferences to generate control commands to operate appliances and lighting system in
building? (Chapter 9)
➢ Research question 3: How to adapt to dynamics of occupant’s automation preferences
in different conditions and over time? (Chapter 9)
Chapter 6. A Framework to Allocate Disaggregated Electricity
Consumption to Daily Activities, using Offline Activity Recognition
6.1. Methodology
Our framework for appliance-level disaggregated electricity consumption for daily activities has
three main parts: Context-aware data separation; Segmentation and Activity recognition; and
Electricity consumption estimation (Figure 6-1). As an input, this framework receives the labeled
data of appliance usage, provided by an NILM system. The input data is first separated based on
context information using ontological reasoning. The separated data is then segmented in order to
detect the lengths of activities and generate the feature vectors, by which the activities are detected
via a classification model. Finally, the contribution of appliances to the electricity consumption of the recognized activities is estimated. These parts are explained in detail in the following sub-sections.
Figure 6-1 Components of the framework
Input to framework
The input to our framework is disaggregated appliance usage data provided by an event-based NILM system that uses a single sensing point on the main power line. Electricity disaggregation using the NILM technique was first introduced by Hart almost 30 years ago [100]. This technique relies on
the idea that changes in the state of appliances (on or off), known as events, generate distinctive
signatures in the power signal. In order to detect these signatures, we used a Generalized
Likelihood Ratio Test event detector, which is a probabilistic event detection algorithm [101–103].
Figure 6-2 shows some examples of the events generated by different appliances on the power
signal.
The differences in the amplitude and shape of the signatures suggest that, using a previously trained classifier, it is possible to map the detected events onto the appliances. As is common in all classification problems, the performance of the classifier is highly related to the features used
classification problems, performance of the classifier is highly related to the features that are used
for classification. Studies have shown that the extracted features from power signal depend on the
resolution of the signal. The higher the signal resolution, the more information can be obtained
from the extracted features. Therefore, in this study, in order to acquire high-resolution signal
measurements, we used a high-frequency data acquisition system, consisting of voltage and current
sensors installed on the main AC power line. The main components of AC power flow are real power, reactive power and apparent power. Real power is the portion of power that is entirely consumed in the load. Reactive power, which arises when there is a phase difference between the current and voltage waveforms, is first stored in the load and then returned to the grid. The vector sum of real and reactive powers is called apparent power. Among
these three components, i.e., real power, reactive power and apparent power, real and reactive
powers were used as features for classification. In order to extract these features, the acquired
current and voltage measurements were first processed using mathematical transforms. Along this line, based on an approach presented in [104], we applied the short-time Fourier transform to the current and voltage to obtain the transformed vectors (I and V, respectively). The presence of non-linear loads in a power system causes harmonic frequencies that must be taken into account in real and reactive power computations. Using the transformed vectors I and V, the real and reactive power components for different harmonics were computed via the following equations [105]:
P_k(t) = |I_k(t)| · sin(θ(t)) · |V_1(t)|        Equation 6-1
Q_k(t) = |I_k(t)| · cos(θ(t)) · |V_1(t)|        Equation 6-2
In these equations, k represents the harmonic index and takes values from 1 to 9. P_k, Q_k and I_k are, respectively, the real power, reactive power and transformed current waveform for the kth harmonic. V_1 represents the normalized negative-frequency coefficient of the transformed voltage waveform for the first harmonic. Finally, θ is the angle of V_1, which represents the voltage phase shift relative to the Fourier transform window.
For each detected event, the real and reactive powers computed via Equations 6-1 and 6-2 for different harmonics, over a fixed-length time series segment containing the event, form the feature vector X_n [105]:
X_n = { P_1[n], Q_1[n], …, P_k[n], Q_k[n] }        Equation 6-3
The obtained feature vectors of detected events were then classified into appliance classes using a classifier previously trained with the NILM labeled data. For each classified event i, a vector (E) carrying the label (l), time of occurrence (t) and change in first-harmonic real power (ΔP) associated with the event was stored in a local database (DB_Input), which was later used as an input for the rest of the framework:

E_i = { l_i, t_i, ΔP_i }        Equation 6-4
DB_Input = { E_i }        Equation 6-5
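As an illustration of Equations 6-1 through 6-3, the per-harmonic feature computation can be sketched in Python. This is a minimal sketch, not the thesis's implementation: the FFT normalization, window length and function name are our own assumptions, the positive-frequency rFFT coefficient stands in for the negative-frequency convention stated above, and the event detector and classifier are omitted.

```python
import numpy as np

def harmonic_powers(i_win, v_win, fs, f0=60.0, k_max=9):
    """Per-harmonic real/reactive power features for one analysis window
    of current (i_win) and voltage (v_win) samples, per Eq. 6-1 to 6-3."""
    n = len(i_win)
    I = np.fft.rfft(i_win) / n            # transformed current, normalized
    V = np.fft.rfft(v_win) / n            # transformed voltage, normalized
    bin0 = int(round(f0 * n / fs))        # FFT bin of the fundamental
    v1 = V[bin0]                          # first-harmonic voltage coefficient
    theta = np.angle(v1)                  # voltage phase w.r.t. the window
    feats = []
    for k in range(1, k_max + 1):
        i_k = I[k * bin0]                 # k-th harmonic of the current
        feats.append(np.abs(i_k) * np.sin(theta) * np.abs(v1))   # P_k, Eq. 6-1
        feats.append(np.abs(i_k) * np.cos(theta) * np.abs(v1))   # Q_k, Eq. 6-2
    return np.array(feats)                # X_n = {P_1, Q_1, ..., P_k, Q_k}
```

The window must be long enough that the ninth harmonic stays below the Nyquist frequency of the sampled signal.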
Figure 6-2 Example of events generated by different appliances on real power signals
Context-aware data separation
Appliances are electrical devices that perform specific jobs in houses. In this chapter, we classify appliances into three main categories: background appliances, lighting appliances, and activity-related appliances. Appliances that are not directly associated with daily activities are classified as background appliances (e.g., an HVAC system, water heater and fridge). As the operation of these appliances is a function of various factors, it is not possible to directly allocate their energy consumption to occupants' daily activities. Therefore, background appliances are outside the focus of this study, and the data of their events is eliminated from the input data. The remaining data is associated either with lighting appliances (e.g., lighting fixtures), which may or may not be used during a daily activity, or with appliances that are always used during an activity (e.g., a microwave), which we categorize as activity-related appliances. Since the events associated with activity-related appliances typically occur during an activity, they are appropriate indicators of daily activities. Hence, the events of activity-related appliances are used for activity recognition. However, for energy estimation of detected activities, the lighting appliances are also taken into account in addition to activity-related appliances, in order to capture the contribution of lighting to the energy consumption of the activity.
Figure 6-3 Schematic view of the ontology with its main classes and properties
In addition to the separation of activity-related appliance data from lighting appliance data, further event separation of activity-related appliances is carried out in order to detect overlapping activities, i.e., activities that are performed in overlapping time intervals. For brevity, hereinafter, by appliances and events we mean activity-related appliances and the events associated with them. Although manual identification of potential overlapping activities is possible in simple cases with a small number of activities, we propose an automated approach to support both scalability and adaptability, using the Web Ontology Language (OWL) [106,107]. In order to identify whether given activities can possibly be
performed in overlapping time intervals, additional context information is required. For instance, an occupant may be preparing breakfast, using the toaster and coffeemaker, while listening to music on the radio, which is also located in the kitchen. In this example, the two activities of preparing breakfast and listening to music can be performed simultaneously, as they require different appliances located in the same space. However, being associated with different appliances does not necessarily mean that two activities can overlap. For example, although one might use the coffeemaker only for preparing breakfast and the oven only for preparing dinner, these two activities cannot overlap, as they are habitually performed in non-overlapping time intervals
the overlapping activities based on the modeled context indicators. Along this line, we use
ontology to formally model activities and stated context indicators as concepts that are connected
with predefined taxonomic and non-taxonomic relationships. Figure 6-3 illustrates the created
ontology. Such ontology contains classes (shown by ovals) as sets of individuals with common
characteristics and properties (shown by arrows) as relationships between individuals of different
classes.
In order to model the ontology, we used OWL, which is based on a family of knowledge representation languages called description logics (DL) [108]. OWL makes it possible to model the semantics of concepts in a well-structured format, i.e., a DL knowledge base, and to derive mathematically proven facts using deductive reasoning. The two main components of a DL knowledge base are the TBox and the ABox. The TBox, the terminological component of the knowledge base, holds the descriptive statements about classes and properties. The ABox, the assertional part of the knowledge base, holds statements about individuals. For example,
“Activity of preparing breakfast requires coffee maker or toaster, which are located in the kitchen”
is a sample statement of TBox, while “Occupant’s current location is kitchen and he is using
coffee maker and toaster to prepare breakfast” is a sample ABox statement. Beyond the general definitions of classes and properties, in order to attain an adequately representative model, on which accurate reasoning can be carried out, additional TBox assertions must be added to the knowledge base, as explained below:
Subsumption: Subsumption supports the subclass-superclass relationship: all members of a class are also members of its superclass. All main classes of our ontology have subclasses. For instance, the activity of watching television (WatchingTV) is a subclass of class Activity, and the living room (LivingRoom) is a subclass of class Space. Since every class is at minimum a subclass of class Thing, WatchingTV is also a member of class Thing, as its superclass (Activity) is a subclass of Thing.
Disjointness: OWL is a language with the open world assumption. Under this assumption, a statement cannot be assumed false unless it is explicitly proven to be false. For example, being a member of one class does not falsify membership of another class. Accordingly, a given occupant's current location could be a member of both class Kitchen and class Bedroom, which is a contradiction in reality. In order to address this issue, we use disjointness assertions. A disjointness assertion states that two disjoint classes do not have any common member. Given the open world assumption, any defined classes may have common members unless their disjointness is asserted in the knowledge base.
Class Restriction and Union: Restrictions on classes specify whether an individual is eligible for class membership. We use these restrictions for subclasses of class Activity, but they can be further extended to other classes as well. For example, an instance of class PreparingBreakfast needs at least one appliance from the instances of class Coffeemaker or class Toaster. This is where the union assertion is needed, as several activities are associated with more than one appliance and do not necessarily require all of them. In the previous example of preparing breakfast, one might use only the coffeemaker one day and both the coffeemaker and toaster another day. In either case, as long as the restriction of using at least one appliance from the stated group is satisfied, the reasoner concludes that the activity of preparing breakfast is performed.
Figure 6-4 Flowchart of the algorithm to identify whether two activities can overlap.
The box shows the core part of the algorithm. KB is the knowledge base, which consists of TBox
and ABox. A, App and S represent main classes of Activity, Appliance and Space, respectively.
The output of the algorithm, i.e., Overlapping, is an array that holds pairs of numbers. Each pair
represents the indices of two appliances that are associated with possible overlapping activities.
As explained, assertions of class and property definitions, along with some additional assertions, form the TBox component of the knowledge base. In order to identify the overlapping activities, we propose the algorithm shown in the flowchart of Figure 6-4. In this algorithm, the possibility of each pair of activities occurring in the same time interval is investigated by adding statements to the ABox and checking whether the added information complies with the previous assertions in the TBox. The algorithm receives the knowledge base (KB) and the arrays A, App and S, representing the sets of all subclasses of the main classes Activity, Appliance and Space, respectively, and constructs an empty output array called Overlapping. For all possible 2-permutations of activities, appliances and spaces, the algorithm adds assertions for individuals to the
ABox. Since for each activity subclass we model a corresponding time subclass with a naming format that is predictable from the name of the activity subclass (e.g., for class PreparingBreakfast we have PreparingBreakfastTime), a separate loop over time subclasses is not needed and the algorithm adds the time assertions based on the activity assertions. Following the modification of the ABox, the reasoner runs a consistency check on the current state of the knowledge base. If the knowledge base is consistent, it can be concluded that the two asserted activities, and consequently the events associated with the related appliances, can overlap. The algorithm adds these appliances to the Overlapping array, which is later used as the basis for data separation. For the next iteration, the added assertions are removed from the knowledge base.
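The pairwise loop above can be mimicked in plain Python. The sketch below is a simplified stand-in for the DL reasoner's consistency check: instead of asserting individuals and testing the knowledge base, it looks up explicit sets of non-disjoint time and space classes. The dictionaries and the function name are illustrative, not part of the OWL implementation.

```python
from itertools import permutations

def find_overlapping(requires, located_in, time_compatible, space_compatible):
    """Identify appliance pairs whose activities may overlap: the activities'
    time classes must not be disjoint, the appliances must differ, and their
    spaces must be identical or not disjoint."""
    overlapping = set()
    for a1, a2 in permutations(requires, 2):
        if frozenset((a1, a2)) not in time_compatible:
            continue          # time classes are disjoint: no overlap possible
        for m in requires[a1]:
            for n in requires[a2]:
                if m == n:
                    continue  # the same appliance cannot serve both activities
                s1, s2 = located_in[m], located_in[n]
                if s1 == s2 or frozenset((s1, s2)) in space_compatible:
                    overlapping.add(tuple(sorted((m, n))))
    return sorted(overlapping)
```

On the example scenario developed below (breakfast, dinner, watching television, grooming), this reproduces the pairs (1,4), (2,4), (3,4), (1,5), (2,5) and (3,5).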
The following example better illustrates the explained procedure:
“We are studying 4 activities of a female occupant, who lives in a one-bedroom apartment. The
daily activities are: preparing breakfast, preparing dinner, watching television, and grooming.
According to the occupant’s lifestyle, she always prepares breakfast and takes a shower before
leaving home to work and comes back at night and prepares her dinner. For preparing breakfast,
the occupant uses electrical appliances including coffeemaker, oven and toaster, but not always
all of them. For preparing dinner, the occupant uses the same appliances that are used for
preparing breakfast, except the coffeemaker. In the morning following taking a shower, the
occupant spends more time in the bathroom using the hair dryer or iron for grooming. In addition
to these activities, this occupant watches television throughout the time she is at home. Although
the television and the DVD player are located in the living room, since there is a view access from
her open kitchen to the television in the living room, the occupant sometimes watches television
while preparing a meal. “
The above scenario is human-understandable, and accordingly it is a trivial task for a human to identify whether two activities can overlap. However, in more complicated situations, with more variation in activities, appliances and occupants, we need to rely on a systematic approach to avoid potential human errors and misconceptions. Along this line, we converted the stated scenario into a machine-understandable format:
We are investigating 4 activities; therefore, we define 4 subclasses in class Activity and
consequently 4 subclasses in class Time, representing the time in which each activity is
performed. Based on these activities, we model 7 subclasses in class Appliance and 3 subclasses in class Space in order to represent the locations of these appliances. Table 6-1
summarizes the names of the explained classes.
Table 6-1 List of subclasses in created ontology for the example scenario
Finally, to model the relationships between these classes we define 3 properties, i.e.,
isPerformedAt, requires and isLocatedIn. Figure 6-5 shows how these properties
relate the instances of different subclasses. As shown in this graph, all subclasses are pairwise disjoint
except T(1) and T(3), to model the fact that activities of preparing breakfast and watching
television can be performed at the same time; T(2) and T(3), to model the fact that activities of
preparing dinner and watching television can be performed at the same time; and finally S(1) and
S(2), to model the fact that the open kitchen space is not completely isolated from the living room.
These assertions create the TBox component of the knowledge base, but the ABox is still empty. To test whether two particular activities can overlap according to the modeled scenario, we need ABox assertions as follows:
Assertions for individuals:
Assertions for properties:
Accordingly, in each iteration, the algorithm adds two assumed instances of activities (sample-
activity1 and sample-activity2) to the ith and jth activity subclasses (A(i) and A(j)); an assumed
instance of time (sample-time), which is the member of both the ith and jth time subclasses (T(i)
and T(j)); two assumed instances of appliances (sample-appliance1 and sample-appliance2) to the
mth and nth appliance subclasses (App(m) and App(n)); and finally an assumed instance of location
(sample-location), which is a member of both the wth and zth space subclasses (S(w) and S(z)). In
order to relate the defined instances, the algorithm also adds assertions for properties in ABox.
These assertions denote that sample-activity1 and sample-activity2 are both performed at sample-
time. The former activity requires sample-appliance1, while the latter requires sample-appliance2.
Both sample-appliance1 and sample-appliance2 are located in sample-location. Following the addition of these statements, the consistency of the knowledge base is tested in order to check whether the newly added statements contradict the permanent TBox statements. If the knowledge base is consistent, which means one can perform the ith and jth activities at the same time using the mth and nth appliances, the pair (m, n) is stored in the output array (Overlapping). Since the mth and nth appliances are associated with possible overlapping activities,
the events related to these appliances must be separated for activity recognition. The stated
procedure iterates until all possible cases are tested. In this example, the output of the algorithm
will be:
{(1,4), (2,4), (3,4), (1,5), (2,5), (3,5)}
which implies that the events associated with App(4) and App(5) must be separated from the events associated with App(1), App(2) and App(3). Accordingly, the events are split into different sets: the first set contains the events associated with App(1), App(2) and App(3), and the second contains the events associated with App(4) and App(5). The remaining events, i.e., those associated with App(6) and App(7), can be included in the first set, the second set, or a new set of their own.
Figure 6-5 Schematic view of the ontology in the example scenario
Segmentation and activity recognition
Segmentation
The datasets generated from the context-aware data separation step, i.e., activity-related appliance
events associated with non-overlapping activities, are segmented into active and inactive intervals
through an unsupervised segmentation process. Active intervals are durations in which the occupant is performing an activity and, therefore, appliance events occur. In contrast, during inactive intervals there are no events, due to the inactivity of the occupant. Since in this chapter we are investigating activities that are associated with appliances, we assume that the minimum number of events in an active segment is two, i.e., one for an appliance state change to “on” and one to “off”. For example, a segment associated with watching television might
contain only one “on” and one “off” event for television. On the other hand, being associated with
several appliances or multiple uses of an appliance, an activity might hold more than two events.
For example, a segment representing the activity of preparing breakfast might contain several “on”
and “off” events for using microwave, toaster and coffeemaker.
Given that the datasets are analyzed on a 24-hour basis, in order to detect the active segments, we
used the heuristic that the lengths of activities and consequently the time differences between an
activity’s associated events are significantly smaller than inactive intervals between the activities.
In other words, considering the events in a day, the activities are associated with segments that
have relatively dense events. Hence, by estimating the density function of event occurrence times, it is possible to detect the activity segments bounded by inactive intervals of zero density.
For this purpose, the density functions are first estimated using the Kernel Density Estimation
(KDE) with Gaussian kernel function via Equation 6-6 [109].
f̂_h(x) = (1 / (n·h)) · Σ_{i=1}^{n} K((x − x_i) / h)        Equation 6-6
where f̂_h is the estimated density function of a sample (x_1, x_2, …, x_n), K is the kernel function and h is the smoothing parameter, called the bandwidth. Among the available kernel functions, we used the Gaussian kernel, which is the most commonly used. We calculated the optimum value of the bandwidth via an approach based on minimizing the expected loss function, proposed in [110]. Figure 6-6(a) visualizes the events that occurred in a sample kitchen during a 24-hour day in chronological order. Figure 6-6(b) demonstrates how the active and inactive segments of the dataset in Figure 6-6(a) can be detected via the KDE.
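A minimal sketch of this segmentation, using a fixed-bandwidth Gaussian KDE over a one-day grid: the bandwidth, grid step, density threshold and function name here are our own assumptions, whereas the chapter uses the optimized bandwidth of [110].

```python
import numpy as np

def active_segments(event_times, bandwidth=600.0, grid_step=60.0, eps=1e-7):
    """Segment one day of event timestamps (in seconds) into active intervals:
    estimate the density of event times with a Gaussian KDE (Eq. 6-6) and
    treat near-zero-density regions as inactive gaps between activities."""
    times = np.asarray(event_times, dtype=float)
    grid = np.arange(0.0, 24 * 3600.0 + grid_step, grid_step)
    z = (grid[:, None] - times[None, :]) / bandwidth
    density = np.exp(-0.5 * z**2).sum(axis=1) / (len(times) * bandwidth * np.sqrt(2 * np.pi))
    segments, start = [], None
    for g, active in zip(grid, density > eps):
        if active and start is None:
            start = g                      # entering an active interval
        elif not active and start is not None:
            segments.append((start, g))    # leaving an active interval
            start = None
    if start is not None:
        segments.append((start, grid[-1]))
    # an active segment must contain at least two events (an "on" and an "off")
    return [(s, e) for s, e in segments if ((times >= s) & (times <= e)).sum() >= 2]
```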
Figure 6-6 Example of event segmentation
(a) chronological order of kitchen events that occurred during a sample day; (b) segmentation of the events shown in (a). Horizontal red lines illustrate the active segments.
Activity recognition
As explained before, an active segment starts and ends with events at times t_i and t_{i+m}, respectively. Following the segmentation process, feature vectors (FV) of the active segments (Seg) are computed. The feature vectors consist of two parts: each component in the first part corresponds to an appliance and carries the total duration for which the appliance is used within the segment (Δt_App_j), and the second part is the start time of the segment:

Seg_{t_i – t_{i+m}} = { E_i, E_{i+1}, …, E_{i+m} }        Equation 6-7
FV_{t_i – t_{i+m}} = { Δt_App_1, Δt_App_2, …, Δt_App_n, t_i }        Equation 6-8
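A sketch of how the feature vector of Equations 6-7 and 6-8 could be assembled from a segment's event list. The tuple layout and function name are our own assumptions; in the framework these fields come from DB_Input.

```python
def segment_features(events, appliances):
    """Feature vector of an active segment (Eq. 6-7 and 6-8): total "on"
    duration per appliance within the segment, plus the segment start time.
    `events` is a time-ordered list of (appliance, time, kind) tuples."""
    durations = {a: 0.0 for a in appliances}
    on_since = {}
    for appliance, t, kind in events:
        if kind == "on":
            on_since[appliance] = t               # remember when it switched on
        elif kind == "off" and appliance in on_since:
            durations[appliance] += t - on_since.pop(appliance)
    start_time = events[0][1]                     # time of the first event
    return [durations[a] for a in appliances] + [start_time]
```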
In order to map the segments to daily activities, a classification algorithm is required. Since classification is a supervised procedure, for a given set of daily activities, labeled instances, i.e., feature vectors whose class labels are known, are initially used to train the classification algorithm. Following the training process, the trained classifier is used to classify new instances, i.e., feature vectors whose class labels are unknown, into activity classes. More
details regarding the training and selection of classification algorithm are provided in section 6.3.
Electricity consumption estimation
Following the activity recognition, the framework estimates the electricity consumption of detected activities by multiplying the power consumption of appliances by the duration of appliance usage. As depicted in Figure 6-1, this part of the framework receives two inputs: the classified activity segments and the lighting dataset containing the data related to lighting events. In order to determine the approximate contribution of each activity-related appliance to the electricity consumption associated with an activity, for each segment, the energy consumption of an appliance usage is computed via the following equations:
(P_average)_{u_i} = ( |(ΔP_on)_{u_i}| + |(ΔP_off)_{u_i}| ) / 2        Equation 6-9
(Δt)_{u_i} = (t_off)_{u_i} − (t_on)_{u_i}        Equation 6-10
(EC)_{App_j} = Σ_{i=1}^{n} [ (Δt)_{u_i} × (P_average)_{u_i} ]        Equation 6-11
where u_i is the ith usage of an appliance within the activity segment, (ΔP_on)_{u_i} and (ΔP_off)_{u_i} are, respectively, the power changes associated with the on and off events of u_i, (t_on)_{u_i} and (t_off)_{u_i} are the start and end times of that usage, and finally (EC)_{App_j} is the estimated energy
consumption of the appliance during that activity. As shown in these equations, we assume that
the power consumption of the appliance in each usage is constant and equal to the average of
power changes in on and off events associated with that usage. Multiplying the calculated average
power consumption by the duration of appliance usage gives the energy consumption of the
appliance during that operation. The total energy consumption of the appliance during the activity
is achieved by summing over the energy consumption of all operations of that appliance in that
activity segment. Although this approach is not precise, it gives an acceptable approximation of
energy consumption for those appliances whose power consumption is almost constant or linear
during their different modes of operation or cycle, which is also the case for the majority of
activity-related appliances. To compute the contribution of lighting appliances to the electricity consumption of the activity, the length of the activity, i.e., the length of the activity segment, is multiplied by the power of the turned-on lights located in the space where the activity is performed.
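A compact sketch of Equations 6-9 through 6-11, plus the lighting term. The function names, the tuple layout, and the units (times in hours, power steps in watts, so results are in watt-hours) are our assumptions for illustration.

```python
def appliance_energy(usages):
    """Energy of one appliance within an activity segment (Eq. 6-9 to 6-11).
    Each usage is (t_on, t_off, dP_on, dP_off): times in hours, power steps in W."""
    total = 0.0
    for t_on, t_off, dp_on, dp_off in usages:
        p_average = (abs(dp_on) + abs(dp_off)) / 2.0   # Eq. 6-9
        total += (t_off - t_on) * p_average            # Eq. 6-10 and 6-11
    return total   # watt-hours

def lighting_energy(segment_hours, light_watts):
    """Lighting contribution: segment length times power of the turned-on lights."""
    return segment_hours * sum(light_watts)
```

For example, a single usage with a 1000 W step lasting half an hour yields 500 Wh.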
6.2. Experimental study
In order to evaluate the performance of our presented framework, we carried out experimental
validation in three test bed apartment units. Two of the test beds were one-bedroom units with four separable spaces (kitchen, bathroom, living room and bedroom), and the third was a studio with three separable spaces (kitchen, bathroom and living room). All were single-occupancy units located in the city of Los Angeles. Data acquisition was carried out for 11, 13 and 16 days in the three units, and the occupants were asked to continue their regular activities during the experiments. The test beds were equipped with our prototype NILM system, monitoring the main power line at the electricity panel inside each apartment. To do so, voltage and current sensors, i.e., a Pico TA041 25 MHz ±700 V differential probe and Fluke i200 AC current clamps, were installed on the main circuit breaker in each apartment for high-frequency sampling (1 kHz).
The collected voltage and current waveforms were processed into power metrics, i.e., real and reactive power time series, as explained in section 6.1.1. In order to evaluate the performance of the NILM system, individual plug meters (Enmetric Powerports) and ambient light sensors (Linksprite DiamondBack microcontrollers equipped with a WiFi module and AMBI™ light intensity sensors) were used to provide the ground truth for plug loads and lighting fixtures,
respectively. Prior to starting our experiment, the NILM system was trained through a two-week real-time training phase. During this period, whenever an appliance was used, the occupant was prompted by the system's interface to manually label the detected event by choosing the appliance associated with that event.
Following the training process, the NILM system automatically labeled the instances. Table 6-2
summarizes the possible events of activity-related appliances and the labels we used to represent
them for our experiments. As demonstrated in Table 6-2, the label of an event is a five-digit number
with two parts separated by a zero. The first part is a three-digit number representing the appliance
name. The second part is a one-digit number indicating the type of event (on or off). There are two
types of events: “on” events with positive power change, represented by even numbers, and “off"
events with negative power change, represented by odd numbers.
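The label scheme can be encoded and decoded as follows; the specific appliance code 101 is illustrative, since the actual assignments are those listed in Table 6-2.

```python
def make_label(appliance_code, event_digit):
    """Build a five-digit event label: a 3-digit appliance code, a zero
    separator, and a 1-digit event type (even = "on", odd = "off")."""
    return f"{appliance_code:03d}0{event_digit}"

def parse_label(label):
    """Recover the appliance code and event type from a five-digit label."""
    appliance = int(label[:3])
    kind = "on" if int(label[4]) % 2 == 0 else "off"
    return appliance, kind
```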
Table 6-2 List of events associated with activity-related appliances
The ground truth labels for activities were provided by a combination of occupants' written diaries, in which the occupants were asked to write down the start time and duration of their activities as well as the appliances used during each activity, and annotations based on sensor data playback (i.e., plug meters and light sensors) and interviews with the occupants regarding their typical activities. The learned activities included preparing breakfast, lunch, dinner and snacks, watching television, and grooming.
6.3. Results
Following the completion of data collection, the input databases were first separated according to
the separation categories identified via the algorithm presented in section 6.1. Along this line, for
each unit, based on an occupant’s daily activities and associated context information, i.e., an
occupant’s habits in performing the activities, appliances that are used during the activities and
existing spaces in the unit, the TBox component of the knowledgebase for context-aware data
separation was built. As explained in our proposed algorithm in section 6.1.2, by adding various
ABox statements iteratively, the consistency of the knowledgebase under different scenarios of
overlapping activities was tested. Using the algorithm, we detected the associated appliances with
all possible overlapping activities perfectly. Based on these appliances, we determined the
categories for data separation. These categories along with the characteristics of each test bed are
provided in Table 6-3.
Table 6-3 Test beds characteristics and appliance categories for data separation identified using
the method explained in section 6.1.2
The data that were separated based on the appliance categories were then used to detect the activity
segments via the unsupervised segmentation process (explained in section 6.1). As noted before,
datasets are analyzed on a 24-hour basis, i.e., a day. The starting point of days for each test bed
was chosen such that no activity segment would be cut into two pieces. We chose these points by
finding the common hours in which the occupant had been inactive during the test days, so that
there was no chance of cutting an activity segment. In Figure 6-7, all events that occurred during
the test days are depicted for each test bed. In addition, the selected starting point of days is
shown by a vertical line. It can be seen that these points vary by case, as different individuals have
different daily routines. Since not everyone has consistent inactive periods across days, finding a
single starting point applicable to all days might not be possible. In such cases, days are first
grouped based on their similarities, and then each group is explored separately to find the
starting point for segmentation.
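The day-start selection described above can be sketched as follows, assuming event timestamps are available as Python datetimes. The timestamps below are illustrative, not the study's data.

```python
from collections import Counter
from datetime import datetime

def choose_day_start(event_times):
    """Return an hour of the day (0-23) with zero events across all test days,
    or None if the occupant has no consistent inactive hour."""
    busy = Counter(t.hour for t in event_times)
    for hour in range(24):
        if busy[hour] == 0:
            return hour
    return None  # fall back to grouping similar days first, as described above

# Illustrative events on three days, all between 07:00 and 22:00
events = [datetime(2018, 1, day, hour) for day in (1, 2, 3)
          for hour in (7, 12, 19, 22)]
print(choose_day_start(events))  # 0 (midnight is inactive on every day)
```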
Figure 6-7 Selected starting point of days for different test beds. Events are shown in
chronological order; the vertical red line shows the starting point of the day.
In order to evaluate the segmentation results, the start time and duration of the detected segments
were compared with the ground truth start time and duration of the activities. The comparison
showed that the data were segmented with a precision of 0.95, a recall of 0.98, and an F-measure
of 0.97 (Table 6-4). Since the precision is lower than the recall, we concluded that the error mainly
resulted from false positives, which occur when there are gaps between groups of events within
an activity segment.
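For reference, the reported metrics are computed from true positives (tp), false positives (fp), and false negatives (fn) in the standard way. The segment counts below are illustrative, not the study's.

```python
def precision_recall_f(tp, fp, fn):
    """Standard precision / recall / F-measure from segment counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Illustrative segment counts (not the study's)
p, r, f = precision_recall_f(tp=95, fp=5, fn=2)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.95 0.98 0.96
```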
Table 6-4 Performance measurement of data segmentation.
Following the segmentation, the feature vectors representing activity segments were computed.
The feature vectors along with the associated ground truth labels were used to train selected
common classification algorithms, including Random Forest, C4.5 Decision Tree, Naïve Bayes,
and Support Vector Machine. In order to evaluate the performance of the stated algorithms, we
carried out a 10-fold cross validation. The average results across the three units are summarized
in Table 6-5.
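A sketch of this comparison using scikit-learn, on synthetic stand-in data rather than the study's feature vectors. Note that scikit-learn's DecisionTreeClassifier is a CART-style tree, used here as a stand-in for C4.5.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the activity feature vectors (6 activity classes)
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=6, random_state=0)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree (CART, C4.5 stand-in)": DecisionTreeClassifier(
        criterion="entropy", random_state=0),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```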
Table 6-5 Average results of the 10-fold cross validation on different classifiers.
As depicted in Table 6-5, the Random Forest classifier, with an average total accuracy of 93.41%,
outperformed the others. Hence, Random Forest was chosen as the classification technique used
in our approach. The confusion matrix provided in Table 6-6 illustrates the average performance
of the chosen classifier in classifying instances of the different activity classes across the three
units.
Table 6-6 Confusion matrix of the Random Forest classifier. The values show the percentage of
instances in each class.
As shown in Table 6-6, the main misclassifications of the classifier were related to the activities
of preparing dinner and preparing lunch. There are two possible reasons for such misclassification:
1) the short length of some preparing-dinner segments, which resulted in confusion with preparing
snack; and 2) the similar start times of preparing lunch and preparing breakfast in some instances,
which led to misclassification as preparing breakfast.
The high average total accuracy achieved through our experimental validation, i.e., 93.41%, is
comparable with that of other high-performing activity recognition approaches. By relying on the
data provided by a single sensing point, our proposed approach reduces cost and complexity in
comparison to common activity recognition approaches, which require the installation of multiple
sensors. Taking the work done by the authors in [73] and [111] as two representative efforts in
this area, monitoring an occupant's activities in a typical one-bedroom apartment unit demands at
least 50 sensors, such as motion sensors, door sensors, or ambient light sensors. Given that these
types of sensors cost about 10 dollars each on average, 50 sensors would cost approximately 500
dollars. Moreover, in order to map the activities to the energy consumption of appliances, plug
meters are needed, which could cost at least an additional 500 dollars. Accordingly, in terms of
the required sensors, using existing activity recognition approaches to detect daily activities and
associate them with appliance usage would cost almost twice as much as our proposed approach
in this chapter, i.e., 380 dollars for voltage sensors and 160 dollars for current sensors.
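The arithmetic behind this comparison can be checked directly, using the figures stated in the text.

```python
# Figures from the text: ~50 ambient/motion/door sensors at ~$10 each, plus
# ~$500 of plug meters, versus a single sensing point ($380 voltage + $160 current)
existing = 50 * 10 + 500
proposed = 380 + 160
print(existing, proposed, round(existing / proposed, 2))  # 1000 540 1.85
```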
Moreover, to get 50 sensors working adequately along with the plug meters, they must be
connected to one or more routers, which themselves are connected to a core computer with a
software to manage and synchronize the data. This means using existing approaches for activity
recognition also increases the costs associated with a wireless sensor network, such as the cost for
the routers and also the software to manage and synchronize the data from various types of sensors
with different sampling rates and data types. In addition, making a sensor network function
properly cannot be achieved with these tools alone, as sensor failure and loss of connection are
known drawbacks of wireless sensor networks. Hence, in order to address these drawbacks, the
network must be audited by an expert on a regular basis.
Due to the high number of sensing points, which are connected wirelessly to the main computer
and which continuously send data, the labor commitment and consequently the cost associated
with auditing the sensor network system is much higher than the cost of auditing our system, in
which there is only one sensing point, connected to the main computer by wires. Based on this
discussion, the difference in cost and complexity between existing approaches and our approach
is much larger than simply the difference in sensor costs, as our approach eliminates the need for
a wireless network and dedicated labor.
Achieving high accuracy in activity recognition requires the implementation of supervised
learning algorithms. Consequently, user engagement during the training process, in which the user
records the activities along with their start and end times, is a common requirement of all high-
performing daily activity recognition approaches. Therefore, compared to the existing work, e.g.,
[73] and [111], our approach is no exception in terms of the type of user engagement, as it does
not add any extra burden on the user during the training period.
Considering the presented results in three test beds, with an average training period of 13 days,
our approach is also similar to existing approaches in terms of the time required to train the
activity recognition model. However, in our case there is, in addition to the training requirement
for activity recognition, a need to train the NILM system, which results in user engagement over
a longer period of time. Yet we can potentially combine these two efforts, which is part of our
planned future work.
Following the activity detection, the approximate electricity consumption of the detected activities
was estimated based on the average power consumption and the usage duration of appliances
within the corresponding activity segments. As the main focus of this chapter is activity
recognition, we did not investigate the precise calculation of appliance energy consumption.
Accordingly, to achieve an acceptable approximation, we compared our calculated appliance
power consumption with the average power consumption from the plug meters, and for cases
where the difference was more than 10%, we used the plug meter value.
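A minimal sketch of this estimation rule, with illustrative values; the function name and numbers are ours, not the author's.

```python
def appliance_energy_kwh(avg_power_w, duration_h, plug_meter_w=None, tol=0.10):
    """Energy of one appliance within an activity segment: average power times
    usage duration, falling back to the plug-meter power when the estimate
    deviates from it by more than `tol` (10% in the text)."""
    power = avg_power_w
    if plug_meter_w is not None and abs(avg_power_w - plug_meter_w) / plug_meter_w > tol:
        power = plug_meter_w
    return power * duration_h / 1000.0

# A 1200 W estimate vs. a 1000 W plug-meter reading is 20% off, so fall back:
print(appliance_energy_kwh(1200, 0.5, plug_meter_w=1000))  # 0.5
```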
Figure 6-8 presents the average contribution of different appliances to the electricity consumption
associated with daily activities in the three units during the experiment period. In addition to the
contribution of appliances to activities, we also explored the total electricity consumption of each
activity by summing the electricity consumption of the associated appliances during the activity.
In Table 6-7, the average daily total electricity consumption (in kWh) of the investigated activities
is presented for the three units.
6.4. Discussion
Based on the presented bar chart and table, the following insights regarding occupants' behavior
in the tested units can be drawn:
Table 6-7 Activities based on their average daily total electricity consumption
Figure 6-8 Contribution of appliances in electricity consumption of daily activities for three units
Activity of preparing meals: The first three bars for each unit in Figure 6-8 represent the activity
of preparing meals at different times of the day (breakfast, lunch, and dinner). It can be seen from
the chart that although these represent the same type of activity (preparing meals), the
contributions of appliances are not similar in all three cases. In unit 1, while the electric kettle has
the highest contribution to the electricity consumed for preparing breakfast, the toaster plays this
role for preparing lunch. Although the toaster's contribution to preparing dinner and preparing
breakfast is similar, due to increased microwave usage, the electric kettle's contribution to
preparing dinner is 20% less than for preparing breakfast. In unit 2, the toaster's contribution
follows the same pattern as in unit 1, with similar values for preparing breakfast and preparing
dinner and a significantly higher value (higher than all other contributors) for preparing lunch. On
the other hand, unlike unit 1, for preparing breakfast and dinner in unit 2, lights are the highest
contributor to electricity consumption. In unit 3, except for preparing breakfast, in which the
coffee maker has the highest contribution, the main consumption is associated with the electric
grill. The occupant's frequent use of the electric grill in this unit is the main reason behind the
high electricity consumption of the meal preparation activities (preparing breakfast, lunch, and
dinner) compared to the other units (Table 6-7). By using the electric grill less frequently or
switching to a more efficient one, this occupant could considerably reduce the energy consumption
of this activity.
Activity of preparing snack: The fourth bar in Figure 6-8 shows the activity of preparing a snack.
As illustrated, the occupants of unit 2 and unit 3 have similar snacking routines, as they both use
appliances for preparing hot drinks, i.e., the electric kettle in unit 2 and the coffee maker in unit 3.
The occupant of unit 1, on the other hand, does not prefer hot drinks for snacking, as the fridge
light, which is a sign of taking food out of the fridge, is the only source of electricity consumption
besides the kitchen lights in this unit. Accordingly, the energy consumption of preparing a snack
in unit 1 is significantly lower than in the other units.
Activity of grooming: As shown in Figure 6-8, for the activity of grooming, there is a 13%
difference between the contribution of lights in unit 1 and that in unit 2. Since the total electricity
consumption of this activity is only 0.08% higher in unit 2, it can be concluded that the lights in
unit 1 consume more electricity during the activity of grooming than those in unit 2.
Activity of watching television: Watching television is the most electricity-consuming activity in
unit 1 and unit 2 and the second most electricity-consuming activity in unit 3. However, as
illustrated in Table 6-7, there is a large difference in the total electricity consumption of this
activity among the units (in unit 3 and unit 2, it is 5 and 11 times greater than in unit 1,
respectively). This variation is mainly due to the difference in the number of hours an occupant
spends on this activity. Accordingly, it can be concluded that the occupants of unit 2 and unit 3
spend many more hours watching television than the occupant of unit 1 does. Based on the bar
chart, for the activity of watching television in all units, the lights contribute almost the same
portion of electricity consumption as the other associated appliances (the television for units 1
and 2, and the combination of Xbox and television for unit 3). Typically, while watching
television, one does not need a brightly lit environment. Hence, by dimming the high-energy-
consuming lights, there is a potential to reduce the electricity consumption during this activity. As
stated before, since watching television is among the most electricity-consuming activities in all
units, this saving strategy could lead to a considerable amount of savings in a year.
The provided comparison of the activities among the units showed how behaviors vary by
occupant and also revealed potentials for energy saving. While realizing some of these potentials
requires behavior change, others can be achieved by providing automation in the building. For
example, as explained in the results, dimming the lights during the activity of watching television
can potentially save a considerable amount of energy in the long term. Here, energy could be
saved either by modifying occupant behavior, i.e., manually lowering the light level during this
activity, or by adding an automation system that dims the lights when the activity of watching
television is recognized.
6.5. Conclusion
In this chapter, we presented a framework to allocate appliance-level electricity consumption to
daily activities of occupants via activity recognition. Our presented framework consists of three
main parts: context-aware data separation, segmentation and activity recognition, and electricity
consumption estimation. The input of this framework is the disaggregated appliance usage data
provided by a NILM system. In order to separate overlapping activities, we introduced an
ontology-based approach, based on which the input data are separated into categories with regards
to the context information. Then the separated data are segmented into active and inactive
segments. Next, the active segments are mapped into activities. Finally, the associated electricity
consumption of detected activities is estimated. In order to evaluate our framework, an
experimental validation in three single occupancy apartment units was carried out. The
experimental results showed a total F-measure value of 0.97 for segmentation and an average
accuracy of 93.41% for activity recognition. Even though our proposed framework uses a single
sensing point, the high accuracy achieved through our experimental validation is comparable with
that of other high-performing activity recognition approaches, which commonly require the
installation of multiple sensors. Following the detection of activities, the approximate electricity
consumption associated with each activity was estimated. The differences in appliances'
contributions to the electricity consumption of the investigated activities provided us with insights
into the different ways daily activities are performed in the tested units. These insights might
further be used to give energy-saving recommendations to occupants.
Chapter 7. A Framework for Real Time Activity Recognition and
Wasteful Behavior Detection
7.1. Methodology
Our framework for real-time activity recognition and waste detection consists of three sub-
algorithms: action detection, activity recognition and waste estimation. Figure 7-1 shows the
overview of the framework. As the real-time input, the action detection algorithm receives the data
from the sensing system to detect the occurred actions (e.g., turning on an appliance) using
clustering techniques. Detected actions are then used by the activity recognition algorithm to
recognize activities (e.g., preparing food) through semantic reasoning on our constructed ontology,
which is modeled by means of Description Logic (DL) language. Based on the recognized
activities, the waste estimation algorithm determines the potential waste and accordingly estimates
the potential savings. To better understand the functions of these algorithms and the way they work
together, we first explain the physical components of our framework: the environment and the
sensing system.
Figure 7-1. Overview of the framework.
In the context of our framework, an environment is defined as the spaces that are used by occupants
(e.g., an office with multiple workstations or an apartment with various rooms). The sensing
system, consisting of a set of plug meters and sensors that are managed by a controller,
continuously monitors the environment by capturing the environment’s physical properties (e.g.,
power consumption of appliances or light intensity levels in a space) and converting them into
signals. These signals are received by the controller (i.e., the brain of the framework), which attains
the values of virtual nodes assigned to the sensors and plug meters based on a pre-set rate, which
we call sampling rate (e.g., 5 seconds). Using the received signals, the controller detects the
occupant’s current activity and consequently estimates potential energy savings through a set of
procedural analyses. The details are explained extensively in the following sub-sections.
Action Detection
In our framework, actions are defined as particular changes in the environment and they are
executed either by occupants or by artifacts (e.g., an appliance or an automation system). Actions
might affect a physical reality that is being captured by the sensing system. For example, switching
a computer from the state ON to OFF by an occupant is an action that affects the power
consumption of the computer as a physical reality in the environment and it could be captured by
a plug meter. Our action detection algorithm receives these captured effects as input raw data from
the sensing system, at a predetermined rate, which we call detection rate (e.g., 30 seconds). The
input raw data are first preprocessed to extract features as high-level representations of the raw
data. The features we use here are the sequence of readings from one sensor within a captured
fixed-size window of time. Accordingly, at a given time t, the feature vector x_t associated with
time window t is denoted as:
x_t = [s_{t′−(n−1)δ}, …, s_{t′−2δ}, s_{t′−δ}, s_{t′}]    Equation 7-1
(n = ⌊l_w / δ⌋ and t − t′ ≤ δ)
where s_{t′} is the value of the sensor reading at time t′, δ is the idle time between sensor readings,
and l_w is the size of the sliding window of time. It is worth mentioning here that if there is a
sensor reading at time t, then t′ equals t; otherwise, t′ is the most recent time before t at which
there is a sensor reading. We use overlapping sliding windows with a step size of l_w/2, which is
equal
to the detection rate mentioned previously. For example, let’s assume the size of the sliding
window is 60 seconds. Starting at time zero (t = 0 s), the first feature extraction (and consequently
the first action detection) occurs after 60 seconds (at t = 60 s); thereafter, every 30 seconds (e.g.,
at t = 90 s) the algorithm extracts the features using the data captured during the past 60 seconds
(e.g., at t = 90 s, feature vectors are extracted from the sensing data captured between t = 30 s and
t = 90 s).
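The windowing in this example can be sketched as follows; the sensor values and the dict-based storage are illustrative assumptions, not the author's implementation.

```python
# readings: sample time (s) -> sensor value, sampled every `delta` seconds
def feature_vector(readings, t, l_w, delta):
    """Last n = l_w // delta readings ending at t' (the most recent sample
    time at or before t), per Equation 7-1."""
    n = l_w // delta
    t_prime = max(s for s in readings if s <= t)
    return [readings[t_prime - i * delta] for i in range(n - 1, -1, -1)]

# Illustrative stream sampled every 5 s; window l_w = 60 s, step l_w/2 = 30 s
readings = {s: s * 0.1 for s in range(0, 121, 5)}
fv = feature_vector(readings, t=90, l_w=60, delta=5)
print(len(fv))  # 12 readings, covering t = 35 s ... 90 s
```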
Following the feature extraction, we reduce the dimensions of the feature space by applying
Principal Component Analysis (PCA). PCA is a multivariate technique that converts a set of
possibly correlated features into uncorrelated features with high variances [112,113]. Given an
m × n dataset X (i.e., the set of feature vectors in the original feature space), in PCA we wish to
find k unit vectors φ_1, φ_2, …, φ_k (k ≤ n) such that when the data are projected onto the
directions corresponding to these unit vectors, the variance of the projected data is maximized.
To do so, we choose φ_1, φ_2, …, φ_k to be the eigenvectors corresponding to the top k
eigenvalues of the covariance matrix of X (assuming that each feature vector x_i in the matrix X
has been normalized to have zero mean). The resulting φ_1, φ_2, …, φ_k provide a new basis to
represent the data. In this new basis, the feature vector x_i is represented as
z_i = [φ_1^T x_i, φ_2^T x_i, …, φ_k^T x_i]. Each new feature in z_i is called a Principal
Component (PC). The number of PCs to use (i.e., k) is decided based on the percentage of
variance in the original data X that can be explained by the k PCs. Since in our study more than
90% of the variance in the data could be explained by two PCs (for almost all actions), using
additional PCs did not significantly improve the action detection performance. Accordingly, we
decided to use two PCs in our experiments (i.e., k = 2). Based on Equation 7-1, each feature
vector x_t contains n features (i.e., [x_1, x_2, …, x_n]). By applying PCA, these n features are
converted into two features (i.e., [PC_1, PC_2]). Using the resulting principal components as a
low-dimensional representation of the feature vectors, we can improve the performance of action
detection.
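A minimal PCA sketch matching the description above (centering, eigendecomposition of the covariance matrix, projection onto the top-k eigenvectors), run on synthetic data with rank-2 structure rather than the study's feature vectors.

```python
import numpy as np

def pca(X, k=2):
    """Project X onto the k eigenvectors of its covariance matrix with the
    largest eigenvalues; also return the fraction of variance explained."""
    Xc = X - X.mean(axis=0)                    # zero-mean each feature
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1][:k]      # indices of the top-k eigenvalues
    phi = eigvecs[:, order]                    # n x k projection basis
    return Xc @ phi, eigvals[order].sum() / eigvals.sum()

# Synthetic 8-D data with rank-2 structure, so 2 PCs explain ~all the variance
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))
Z, explained = pca(X, k=2)
print(Z.shape, round(explained, 3))
```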
The reduced feature vector x_t is used to detect the associated action at time t using a clustering
model that is learned in advance via standard clustering algorithms (i.e., K-Means and
Expectation-Maximization (EM) [112]) through inductive reasoning. Accordingly, a set of
unlabeled feature vectors (i.e., {x_1, x_2, …, x_m}) from a previously collected dataset from the
sensing system (which we call the input learning feature vectors) is used to learn the clusters. The
K-Means algorithm clusters
the input learning feature vectors into K disjoint clusters by minimizing the within-cluster sum of
squares:

min Σ_{k=1}^{K} Σ_{x_i ∈ C_k} ‖x_i − μ_k‖²    Equation 7-2
Here, μ_k is the mean of the data points in cluster C_k. The EM algorithm, on the other hand, finds
the clusters by learning a Gaussian Mixture Model (GMM). Accordingly, the input feature vectors
are assumed to be generated from a number of Gaussian distributions with unknown parameters
(i.e., mean μ_j and covariance matrix Σ_j) that are found through the following two steps of the
EM algorithm:

Expectation step: estimate probabilities for cluster assignments.
Maximization step: update the parameters μ_j and Σ_j.
These two steps are carried out repeatedly until convergence is achieved (more details on the EM
algorithm can be found in [112]). The difference between the K-Means and EM algorithms is that
K-Means makes hard cluster assignments (i.e., assigns each feature vector to exactly one cluster),
whereas EM makes soft cluster assignments (i.e., assigns each feature vector a probability between
0 and 1 for each cluster). It should be pointed out here that the number of clusters, which must be
specified as a hyperparameter prior to clustering, is equal to the number of action types we want
to detect for a given artifact. For example, the number of clusters in the action detection model for
a monitor is set to 3, as there are 3 types of actions associated with a monitor (i.e., turning a
monitor on, turning it off, and switching it to standby mode).
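The contrast between hard and soft assignments can be sketched with scikit-learn's KMeans and GaussianMixture (an EM-fitted GMM), on synthetic (PC1, PC2)-like data; the three clusters mirror the monitor example, and all values below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])  # 3 action types
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in centers])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
gmm = GaussianMixture(n_components=3, random_state=1).fit(X)  # fitted via EM

print(kmeans.labels_[:3])                  # hard assignments: one cluster id each
print(gmm.predict_proba(X[:3]).round(2))   # soft assignments: per-cluster probabilities
```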
Activity Recognition
In our framework, we define activities as combinations of different actions. Accordingly, unlike
the actions, which are detected using the data captured by a single sensor, activities are detected
using the data captured by multiple sensors (e.g., activity of working with computer might be
associated with several sensors including the motion sensor monitoring the workstation, the light
sensor monitoring the light intensity of the workstation, and the plug meters monitoring the power
consumption of the monitor and the computer). To eliminate the need for labeled training data for
activities, our semantic-based activity recognition algorithm carries out deductive reasoning on
our implemented ontology for the activities. Below, we provide more details on this ontology first,
followed by an explanation of our algorithm for activity recognition.
Ontology of activities:
We implemented an ontology consisting of activities and additional contextual information,
modeled as concepts. These concepts are linked by predefined taxonomic and non-taxonomic
relationships. Figure 7-2 illustrates the main components of this ontology.
An ontology contains classes (shown as ovals in Figure 7-2), which are sets of instances with
common characteristics, and properties (shown as solid arrows in Figure 7-2; inverse properties
are shown as dashed arrows, e.g., the inverse property of isLocatedIn is locates), which relate the
members of different classes. As shown in Figure 7-2, the main classes of the constructed ontology
are: Scenario, Action, Artifact, Space, Occupant, and WasteType.
Instances of class Scenario represent the currently performed activities. To specify the type of
activities, subclasses are defined within the class Scenario. For example, a scenario, in which
Occupant A starts working with his/her computer and Occupant B leaves the room, in our
ontology, is represented by defining two instances, one in subclass Working_with_computer
(related to Occupant A), and the other in subclass Leaving_workstation (related to Occupant B).
If an occupant performs multiple activities at the same time, multiple instances that are related to
the same instance of class Occupant are created. Scenarios are related to the instances of class
Action, which includes different subclasses representing sets of actions. Returning to the previous
example, the activity of Occupant A (i.e., working with a computer) could be related to the
occupant’s two consecutive actions of sitting on a chair and turning on a monitor. In our ontology,
the former is modeled as an instance of subclass Sitting_on_chair and the latter as an instance of
subclass Turning_on_monitor in class Action. The property hasAction relates these instances to
the instance in subclass Working_with_computer. Another class that is related to the
scenarios and also actions is Artifact. The members of this class represent the objects in the
environment that are monitored by capturing a physical reality via the sensing system.
Accordingly, in the provided example, two instances in the class Artifact could be defined to
represent Occupant A’s monitor and chair, the states of which are monitored via a plug meter and
a motion sensor, respectively. Finally, for each created artifact, three additional instances are
defined as members of the Space, Occupant, and WasteType classes, respectively: (1) the space
in which the artifact is located; (2) the occupant by whom the artifact might be used; and (3) the
waste types relevant to the artifact. In the previous example, the created instances for the monitor
and chair are both located in the same space (i.e., Workstation_1) and used by the same occupant
(i.e., Occupant_A). In subsection 7.1.3, an explanation of waste types is provided.
Figure 7-2 Schematic view of the ontology of activities with its main classes and properties.
To model the explained ontology, we used the Web Ontology Language (OWL), which is based
on a family of knowledge representation languages called description logics (DL) [106,114].
OWL makes it possible to model the semantics of concepts in a well-structured format (i.e., a DL
knowledgebase), and derive mathematically proven facts using deductive reasoning. The two main
components of the DL knowledgebase are TBox and ABox. TBox, which is the terminological
component of the knowledgebase, holds the descriptive statements about the classes and
properties. On the other hand, ABox, which is the assertional part of the knowledgebase, is related
to the statements about individuals. Assertions of class and property definitions along with some
additional assertions, including subsumption (to support subclass-superclass relationship),
disjointness (to restrict the membership of instances in multiple classes), class restriction (to
restrict the class memberships), and union (to support inclusion-exclusion principle), create the
TBox component of the knowledgebase. The knowledgebase is initially constructed based on
information about the actions and appliances. For example, given that we want to detect the
activities of an occupant in an office, we first need to specify which appliances he/she has (e.g., a
monitor and a computer) and what operational states these appliances have (e.g., ON, OFF, or
standby).
Based on this information, we specify the number of clusters for actions (e.g., six clusters
representing turn on a monitor, turn off a monitor, switch a monitor to standby mode, turn on a
computer, turn off a computer, and switch a computer to standby mode). In addition, we specify
the activities (e.g., working with a computer) along with their related actions, artifacts, spaces,
occupants and waste types to build the knowledgebase.
Figure 7-3 Activity Recognition (AR) algorithm.
Activity recognition algorithm:
At time t, our algorithm for activity recognition receives the detected actions in the current
window of time (between t − l_w and t) from the action detection algorithm (e.g., sitting on a
chair and turning on a monitor) and accordingly modifies the knowledgebase by appending new
statements to the ABox to represent the current condition. In particular, first a new instance of
class Scenario is created and added to the knowledgebase. Next, the corresponding assertions
regarding the occurred actions and current states of appliances and occupancy status are added to
the knowledgebase (e.g., in current scenario, chair and monitor are used by Occupant A for actions
of sitting on a chair and turning on a monitor). Next, the consistency of the knowledgebase is
checked by an OWL reasoner and, consequently, new facts are inferred by reclassifying the
recently added instance of class Scenario into Scenario’s subclasses (which represent the
activities), and by deleting previously added instances of class Scenario in case of activity
termination.
example, consider a case in which the created instance of Scenario satisfies only the class restrictions of Working_with_computer: the reasoner deduces that the activity of working with a computer is being performed, since a consistent knowledgebase that complies with the assertions in the TBox can be achieved only when the new instance is assigned to the subclass Working_with_computer.
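Very loosely, this reclassification step can be mimicked without an OWL reasoner: a scenario is assigned to the activities whose restrictions its asserted actions satisfy. The rules below are hypothetical stand-ins for the TBox class restrictions:

```python
# Loose Python stand-in for reclassifying a Scenario instance into the
# Scenario subclasses; the framework itself relies on an OWL reasoner
# checking class restrictions, not on this code.

ACTIVITY_RESTRICTIONS = {
    # hypothetical restrictions: the actions a scenario must contain
    "working_with_computer": {"sit_on_chair", "turn_on_monitor"},
    "watching_television": {"sit_on_couch", "turn_on_television"},
}

def classify_scenario(observed_actions):
    """Return the activities whose restrictions the scenario satisfies.

    An empty result corresponds to the reasoner leaving the Scenario
    instance unclassified (no activity recognized in this window).
    """
    return [activity
            for activity, required in ACTIVITY_RESTRICTIONS.items()
            if required <= observed_actions]

print(classify_scenario({"sit_on_chair", "turn_on_monitor"}))
# -> ['working_with_computer']
```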
Waste Estimation
To estimate the potential waste associated with the activities, the waste estimation algorithm uses
two inputs. The first input is the set of waste estimation policies that are relevant to the appliance
of interest. We decided on these policies based on the typical automation strategies that are used
for each appliance type to reduce the cost of electricity via reducing the energy consumption of
the appliances or operating the appliances in hours with lower electricity rates. Along this line, we
categorize the stated automation strategies into three types. The first type reduces the electricity cost by shifting electricity load to non-peak hours (i.e., operating appliances in hours with lower electricity rates), namely Demand Response (DR). Several utility providers offer non-flat electricity rates (i.e., higher prices during peak demand hours and lower prices at other times) [115,116] to encourage users to postpone their energy-consuming activities to non-peak hours.
Since not all activities can be postponed (e.g., postponing certain office activities would affect office workers' productivity and convenience, since peak hours mainly fall after working hours), in this chapter we consider this type of strategy only for certain household appliances (i.e., dishwasher, washer and dryer). The second type of automation strategy reduces the electricity cost via
controlling the appliance standby power, which results in energy and thus cost savings. Standby
power is the electricity used by appliances while they are not ON but still plugged in. Such control approaches can reduce standby power consumption by 30% [117]. As the majority of
appliances in commercial buildings consume standby power, this strategy offers considerable
potential savings. Finally, the third type of automation strategy reduces the electricity cost via
controlling unneeded appliances that are left on, which impacts both safety (e.g., turning off an unneeded iron at home) and energy/cost savings (e.g., turning off monitors when they are not used
by their users). Typical automation strategies to control lighting fixtures and lamps also fit in this
group.
Figure 7-4 Waste Estimation (WE) algorithm.
Based on the three types of automation strategies (i.e., demand response, standby power control
and left on appliance control), in our framework, we defined three waste estimation policies of
“peak hour usage,” “standby power usage,” and “unneeded left on appliance usage.” As not all
policies apply to all appliances, this information is appended to the ontology explained in the
previous section via adding instances in the WasteType class and relating them with the appliances.
Accordingly, for each defined appliance in the ontology, in addition to the location and typical
user, the particular waste estimation policies that are relevant to the appliance are specified and
added to the TBox component of the knowledgebase to be accessed by the waste estimation
algorithm as input (Figure 7-4). As previously explained, standby power control and left-on appliance control reduce energy consumption, whereas demand response leaves the energy consumption unchanged. For example, the activity of using a dishwasher for a given user requires a certain amount of energy, regardless of the time the user performs the activity (peak hours or non-peak hours). When the activity is postponed from hours with a higher electricity rate to hours with a lower rate, the energy consumption is unchanged but the lower rate reduces the electricity cost of that activity. Therefore, for the waste
estimation policy of “peak hour usage”, our algorithm estimates the peak-hour energy consumption
of activities that could be potentially shifted to non-peak hours.
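This estimate can be sketched as follows; the peak window and the two rates are made-up placeholders rather than the Time-of-Use tariff used later in the experiments:

```python
# Sketch of the "peak hour usage" waste estimate for a shiftable activity.
# The peak window and rates below are illustrative assumptions only.
PEAK_HOURS = range(13, 17)             # assumed peak window: 1 pm to 5 pm
PEAK_RATE, OFFPEAK_RATE = 0.22, 0.12   # assumed $/kWh

def peak_hour_waste(samples):
    """samples: iterable of (hour_of_day, kwh) readings for one activity.

    Returns (shiftable_kwh, cost_saving): the energy consumed during peak
    hours that could be shifted, and the cost reduction if it were shifted.
    The energy itself is unchanged by shifting; only the rate differs.
    """
    shiftable = sum(kwh for hour, kwh in samples if hour in PEAK_HOURS)
    return shiftable, shiftable * (PEAK_RATE - OFFPEAK_RATE)

# A dishwasher run partly inside the assumed peak window:
energy, saving = peak_hour_waste([(14, 0.6), (15, 0.5), (20, 0.4)])
print(round(energy, 2), round(saving, 2))  # 1.1 0.11
```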
The second input to the waste estimation algorithm is the set of recognized activities. For a given appliance,
the waste estimation algorithm receives the energy consumption data from the sensing system.
Based on the recognized activities and waste estimation policies that are applicable to the
appliance, it identifies whether the current consumption of an appliance is considered as waste and
accordingly is a potential cost or energy saving area. For example, suppose the waste estimation algorithm receives an input from the activity recognition algorithm that the activity of working with the computer for Occupant A has terminated. Since the monitor and computer are still ON and hence consuming energy, the waste estimation algorithm checks, using the activity ontology, whether the waste estimation policy of "unneeded left on appliance usage" is applicable to these appliances. If it is, the algorithm reports the current energy consumption of these appliances as an instance of waste. Figure 7-4 shows the explained algorithm. It should be pointed out here that in our
framework, potential waste differs from actual waste. Potential waste is detected solely based on the termination or beginning of activities. For example, when an occupant stops working with the computer and leaves his/her desk, but the computer is still on and consuming energy, this is
detected as potential waste that could be saved. However, in this scenario, it might be the case that
a simulation program is still running on the computer and thus, the potential detected waste is not
an actual waste. Extending our framework to differentiate between potential waste and actual
waste will be carried out as part of our future work.
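The decision the algorithm makes for each appliance reading can be sketched as below; the policy table and appliance names are illustrative (the framework reads them from its ontology), and the result is a potential, not actual, waste label:

```python
# Sketch of the per-appliance waste-flagging decision. The policy table
# and appliance names are illustrative assumptions; the framework looks
# these up in its activity ontology.
POLICIES = {
    "computer": {"unneeded_left_on", "standby_power"},
    "monitor": {"unneeded_left_on", "standby_power"},
    "dishwasher": {"peak_hour"},
}

def flag_waste(appliance, state, watts, activity_active):
    """Return the potential-waste label for one appliance reading, or None.

    Power drawn by an appliance that is no longer tied to an active
    activity is flagged as potential (not necessarily actual) waste.
    """
    applicable = POLICIES.get(appliance, set())
    if watts <= 0:
        return None
    if state == "STANDBY" and "standby_power" in applicable:
        return "standby_power"
    if state == "ON" and not activity_active and "unneeded_left_on" in applicable:
        return "unneeded_left_on"
    return None

# Occupant A stopped working; the monitor is still ON, drawing 25 W:
print(flag_waste("monitor", "ON", 25.0, activity_active=False))
# -> unneeded_left_on
```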
7.2. Experimental Study
In order to evaluate the performance of our framework, we carried out an experimental study in a
multi-occupancy office with five occupants and in two single-occupancy apartments. The simplified
layouts of the testbeds are depicted in Figure 7-5. The testbeds were equipped with our sensing
system prototypes. In each prototype, a set of plug meters to measure the power and energy
consumption of appliances, light sensors to capture ambient light intensity, and binary motion
sensors that were triggered by human motion were controlled via the microcontrollers that were
equipped with XBee modules. XBee is based on the ZigBee specification and uses the IEEE 802.15.4 networking protocol. Compared to other wireless standards, such as Wi-Fi or Bluetooth, ZigBee offers a more cost- and energy-efficient mesh network and is hence a better fit for home or office networks where short-range wireless data transfer is required [118]. The microcontrollers
wirelessly communicated with the master controller and sent the received sensing data as a
combination of binary signals captured by the motion sensors and continuous signals captured by
the plug meters and light sensors.
The data acquisition was carried out for two weeks in each testbed. The plug meters and sensors were sampled every 5 seconds. During the experiment, the occupants were asked to record their performed activities, along with activity start and finish times, using a provided online platform for activity logging. It should be pointed out here that the data recorded by
occupants was later used as the ground truth for validating our algorithms and not for training
purposes.
7.3. Results
In order to learn models for action detection, we used the data from the plug meters and sensors, collected during a data collection period of two days to one week prior to the experiment. As stated before, we used the clustering algorithms K-means and EM (GMM) to detect the actions associated with feature vectors consisting of a sequence of sensor readings within a fixed-length window of time. The length of the time window depends on the acceptable action detection performance (i.e., accuracy) and the allowable delay within which actions should be detected. Based on these two factors, in this study we used a time window of 60 seconds (i.e., a detection rate of 30 seconds).
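As a toy illustration of this front end, the sketch below windows a synthetic one-dimensional power signal and clusters the window means with a plain k-means (the study clustered multi-sensor feature vectors, with GMM/EM and PCA variants as well):

```python
import random

# Toy version of the action detection front end: cut a power signal into
# fixed-length windows, summarize each window by its mean, and cluster.
# The synthetic signal and k=2 are assumptions for illustration only.

def windows(signal, length):
    """Split a reading sequence into non-overlapping fixed-length windows."""
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, length)]

def kmeans_1d(points, k, iters=20, seed=0):
    """Plain 1-D k-means; returns the cluster centers, sorted."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: abs(p - centers[j]))].append(p)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)

# 5-second samples: appliance OFF (~0 W), then switched ON (~60 W).
signal = [0.1, 0.2, 0.1, 0.2, 59.8, 60.2, 60.1, 59.9]
features = [sum(w) / len(w) for w in windows(signal, 2)]
centers = kmeans_1d(features, k=2)
print([round(c, 2) for c in centers])  # [0.15, 60.0]
```

The two recovered centers correspond to the OFF and ON operating levels; in the real pipeline each cluster maps to one action (state transition) of the artifact.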
Figure 7-5 Layout of the testbeds
For activity recognition, based on the testbeds' characteristics, we constructed an ontology for each testbed using the Owlready 0.2 package in Python [119]. Using the HermiT reasoner [120], one of the most commonly used semantic-based reasoners, we implemented our algorithms for real-time activity recognition and waste estimation. Following the data collection, the performance of action detection and activity recognition was evaluated using three fully labeled datasets generated by three different sensor networks in the three testbeds. The ground truth labels for actions and activities were provided by a combination of the occupants' recorded activity logs and manual annotations based on data playback (i.e., of plug meters and sensors). A summary of the performance evaluation of the action detection algorithm using four different approaches (K-means, K-means with PCA, GMM, and GMM with PCA) is reported in Table 7-1. In the residential testbeds, there were 45 and 27 different types of actions in apartments 1 and 2, respectively. In the
office testbed, there were 24 different types of actions. As shown in this table, actions are specified
by the state of their associated artifacts or space. For a given artifact/space, the action detection algorithm should detect the cases in which the state of the artifact/space changes through an action (e.g., turning on a television from standby mode or sitting on a chair), as well as the cases in which the state of the artifact/space remains unchanged because no action has happened. Accordingly, in calculating the accuracies for each state, we considered the performance of our action detection algorithm both in detecting actions and in detecting cases with no action (i.e., no-action conditions). For example, in Table 7-1, the reported accuracies for the ON state of the television represent the performance of the algorithm in correctly detecting the following cases: OFF to ON, Standby to ON, and ON to ON (i.e., the no-action condition).
In our validation, the actions that are related to the appliances (e.g., a television) were detected
using the feature vectors generated from the data captured by power meters. Also, the action of
opening or closing the fridge door was detected based on the data of power consumption of the
light bulb inside the fridge using a power meter. For detecting the actions associated with a lighting
fixture, the data of the light intensity in the room captured by the light sensors were used. Finally,
for detecting the actions associated with occupying a space, a chair or a bed, the data captured by
motion sensors were used.
Based on the presented accuracies, GMM with PCA outperforms the other approaches in detecting
most of the actions. Overall, the accuracy rates of the four action detection algorithms for
appliances with cyclic operation, such as dishwasher, computer or coffee maker, are mainly lower
than the accuracy rates of the action detection algorithms for appliances with steady operations in
different modes, such as the television or toaster. The reason is that the action detection algorithms sometimes confused the transitions between different cycles of appliance operation. Based on our results, this confusion is considerably reduced by applying PCA prior to clustering (especially for the dishwasher). In general, a fridge draws power either when the compressor is running or when the door light turns on (i.e., the door is opened). Considering the large difference between these two power-drawing states of a fridge, and its power signal that decreases gradually when the compressor stops working, lower accuracy rates were achieved in detecting the action of opening the fridge door, as the associated state of the input power signal (i.e., door light on) was confused with its other states (i.e., door light off or compressor running). The lower accuracy rates of action detection for the hair iron
and light fixture could be explained by the gradual changes in the input signal of the action
detection algorithms (i.e., gradual changes in the power signal for hair iron due to its operation
mechanism, and gradual changes in the light intensity signal for the light fixture due to the presence of natural
light), which in turn resulted in confusion of the detection algorithms in differentiating between
different states.
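The PCA step referred to above can be sketched with NumPy on synthetic window features (the study kept the first two principal components; the data here is made up):

```python
import numpy as np

# Sketch of the PCA projection applied before clustering. The synthetic
# feature matrix below stands in for the real window feature vectors.
rng = np.random.default_rng(0)

# 100 windows x 4 sensor features, with most variance in one direction.
base = rng.normal(size=(100, 1)) * np.array([[3.0, 2.0, 0.1, 0.1]])
X = base + rng.normal(scale=0.05, size=(100, 4))

def pca(X, n_components=2):
    """Project the rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # largest-variance directions
    return Xc @ top

Z = pca(X, n_components=2)
print(Z.shape)  # (100, 2)
```

Projecting onto the leading components concentrates the informative variance in a few dimensions, which is why the clustering step confuses appliance cycles less often after PCA.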
Table 7-1 Performance measurement of the action detection algorithm.
1 Mean over all three testbeds.
2 Standard deviation over the three testbeds.
3 First principal component.
4 Second principal component.
Using GMM with PCA for action detection, a summary of the performance evaluation of the
activity recognition algorithm is presented in Table 7-2. It should be pointed out here that the activity of using the bathroom represents all activities performed in the bathroom, such as brushing teeth and taking a shower. The achieved accuracy, recall and precision indicate that the performance of the algorithm for a given activity is related to the performance of the action detection algorithm in detecting the actions that are associated with that activity. For example, as the accuracies for detecting the actions associated with the appliances used for preparing food/snack (e.g., coffee maker or microwave) are mainly lower than the accuracies for detecting the actions associated with the appliances used for watching television (e.g., television or media streaming player), the
accuracy of detecting the activity of preparing food/snack is lower than the accuracy of detecting
the activity of watching television. On the other hand, since a combination of actions builds an
activity, the lower accuracies in detecting some of the actions (e.g., opening the fridge door) could
be compensated by the higher accuracies in detecting the others (e.g., turning on the microwave),
which in turn could result in an acceptable performance for recognition of an activity, in this
example, preparing food or snack.
Table 7-2 Performance measurement of the activity recognition algorithm.
1 Mean over the two occupants of the apartment testbeds and the five office workers of the office testbed.
2 Standard deviation over the two occupants of the apartment testbeds and the five office workers of the office testbed.
In summary, the average accuracies for action detection and activity recognition were 97.6% and
96.7%, respectively. Compared to the work in [46] and [121], our validation covers a wider variety of activities, as [46] explores only office activities and [121] only residential activities. Moreover, unlike the validation presented in [46], where a set of scripted activities is performed by the subjects, the participants in our validation were living their normal lives. Finally, unlike [121], our proposed framework does not require labeled activity data for training. Therefore, in terms of the burden on users, our approach has an advantage over other commonly used high-performing activity recognition approaches.
Based on the recognized activities, the waste estimation algorithm estimated the waste associated
with the artifacts, using the energy consumption data provided by the plug meters. In order to
compare the occupants' behavior, the activities' and artifacts' average daily total energy consumption and the percentage of their waste are presented in Tables 7-3 and 7-4. In addition, the cumulative values of each occupant's average daily energy consumption and waste (summed over appliances) are shown in Figures 7-6 and 7-7. In Table 7-4, "idle" represents the conditions in which the occupant is sleeping or performing an activity other than those investigated in this chapter (e.g., resting). To estimate the potential waste related to the "peak hour usage" policy, in this chapter we used the Time-of-Use residential electricity rates in Los Angeles, which have three different peak periods and associated pricing based on the time of day and season (in this experiment we used winter rates) [122].
The results from the waste estimation in all testbeds (office and residential testbeds) show that an
average of 35.5% of the consumption of an appliance or lighting system could be potentially
reduced and an average of 43.1% of the cost of an appliance usage could be reduced by shifting
the appliance usage from peak hours to non-peak hours. As explained before, although the estimated waste could potentially be eliminated by an automation system to reduce the building's energy consumption and peak-hour usage, a procedure to recognize user preferences is essential in order to provide a building automation system that could be widely adopted by building occupants [57].
7.4. Discussion
Office testbed:
As illustrated in Figure 7-6a, occupant C and occupant E have relatively high-consuming behavior (around 3660 watt-hours each). Despite their similar total energy consumption, in terms of "unneeded left on usage," occupant E is more wasteful than occupant C (74.5% of the total energy consumption for occupant E is "unneeded left on usage" waste, compared to 56.5% for occupant C). On the other hand, in terms of "standby power usage," occupant C is more wasteful (2.2% of the total energy consumption for occupant C is "standby power usage" waste, compared to 0.9% for occupant E). Occupant B and occupant D each consumed around 2500 watt-hours in total. Despite their very close total energy consumption, occupant B's behavior is more wasteful both in terms of "unneeded left on usage" and "standby power usage" (67.1% for occupant B versus 55.3% for occupant D for "unneeded left on usage" waste, and 4.8% for occupant B versus 1.4% for occupant D for "standby power usage" waste). Occupant A has relatively low-consuming behavior. In terms of waste, occupant A has the most energy-efficient behavior among all occupants, as 23.2% and 0.6% of his/her total consumption is "unneeded left on usage" waste and "standby power usage" waste, respectively.
Table 7-3 Artifacts' average daily total energy consumption and percentage of waste in the office testbed, per activity. (SP and UL stand for "standby power usage" and "unneeded left on appliance usage", respectively.)

| Occupant | Activity (waste %)¹ | Artifact | Energy (Wh) | SP waste (%)² | UL waste (%)³ |
|---|---|---|---|---|---|
| A | Working with computer (23.07) | Laptop_1 | 134.92 | 1.39 | 6.20 |
| A | | Monitor_1 | 149.92 | 2.96 | 0.48 |
| A | | Lamp_1 | 643.44 | 0.00 | 30.89 |
| A | Working without computer (34.36) | Lamp_1 | 71.49 | 0.00 | 34.36 |
| B | Working with computer (73.09) | Laptop_2 | 196.97 | 40.37 | 6.60 |
| B | | Computer_2 | 1624.35 | 0.00 | 100.00 |
| B | | Monitor_a_2 | 170.57 | 4.54 | 13.00 |
| B | | Monitor_b_2 | 38.79 | 100.00 | 0.00 |
| B | | Lamp_2 | 555.31 | 0.00 | 18.82 |
| B | Working without computer (11.39) | Lamp_2 | 48.29 | 0.00 | 11.39 |
| C | Working with computer (62.00) | Laptop_a_3 | 168.14 | 47.55 | 6.28 |
| C | | Laptop_b_3 | 598.57 | 0.00 | 70.18 |
| C | | Monitor_3 | 2133.72 | 0.00 | 70.83 |
| C | | Lamp_3 | 544.03 | 0.00 | 20.88 |
| C | Working without computer (19.13) | Lamp_3 | 65.98 | 0.00 | 19.13 |
| C | Preparing a drink (0.00) | Coffee-maker_3 | 148.34 | 0.00 | 0.00 |
| C | | Lamp_3 | 1.26 | 0.00 | 0.00 |
| D | Working with computer (58.75) | Computer_4 | 1644.49 | 0.00 | 68.62 |
| D | | Monitor_4 | 264.85 | 12.33 | 5.34 |
| D | | Lamp_4 | 398.19 | 0.00 | 45.30 |
| D | Working without computer (26.97) | Lamp_4 | 58.16 | 0.00 | 26.97 |
| D | Preparing a drink (0.00) | Coffee-maker_4 | 52.21 | 0.00 | 0.00 |
| D | | Lamp_4 | 1.34 | 0.00 | 0.00 |
| E | Working with computer (76.34) | Laptop_5 | 42.05 | 1.24 | 8.70 |
| E | | Computer_5 | 3039.65 | 0.74 | 83.56 |
| E | | Monitor_5 | 298.76 | 3.58 | 38.54 |
| E | | Lamp_5 | 237.09 | 0.00 | 29.15 |
| E | Working without computer (23.94) | Lamp_5 | 8.93 | 0.00 | 23.94 |
| E | Preparing a drink (0.00) | Coffee-maker_5 | 39.02 | 0.00 | 0.00 |
| E | | Lamp_5 | 0.95 | 0.00 | 0.00 |

¹ Percentage of the average daily energy consumption of an activity that is wasted.
² Percentage of the average daily energy consumption of an appliance that is wasted due to standby power usage.
³ Percentage of the average daily energy consumption of an appliance that is wasted due to left on appliance usage.
Table 7-4 Artifacts' average daily total energy consumption and percentage of waste in the residential testbeds, per activity. (SP, UL and PH stand for "standby power usage", "unneeded left on appliance usage" and "peak hour usage" policies, respectively.)

| Apartment | Activity (waste %¹; peak-hour usage %²) | Artifact | Energy (Wh) | SP waste (%)³ | UL waste (%)⁴ | PH cost (%)⁵ |
|---|---|---|---|---|---|---|
| 1 | Watching television (37.97; 0.00) | Television | 311.93 | 0.48 | 21.33 | 0.00 |
| 1 | | Light-fixture_living-room | 710.06 | 0.00 | 45.08 | 0.00 |
| 1 | Working with laptop (40.89; 0.00) | Laptop | 194.62 | 30.83 | 14.64 | 0.00 |
| 1 | | Monitor | 120.49 | 4.98 | 38.70 | 0.00 |
| 1 | | Light-fixture_living-room | 622.38 | 0.00 | 38.91 | 0.00 |
| 1 | Preparing food/drink (68.43; 0.00) | Coffee-maker | 311.70 | 7.25 | 47.17 | 0.00 |
| 1 | | Kettle | 624.44 | 0.00 | 65.47 | 0.00 |
| 1 | | Toaster | 192.08 | 0.00 | 0.00 | 0.00 |
| 1 | | Fridge-door | 6.09 | 0.00 | 23.53 | 0.00 |
| 1 | | Microwave | 188.80 | 26.22 | 0.00 | 0.00 |
| 1 | | Light-fixture_kitchen | 1578.60 | 0.00 | 85.92 | 0.00 |
| 1 | Using bathroom (27.91; 0.00) | Light-fixture_bathroom | 181.08 | 0.00 | 27.91 | 0.00 |
| 1 | Grooming (64.44; 0.00) | Hair-dryer | 158.39 | 0.00 | 0.00 | 0.00 |
| 1 | | Hair-iron | 16.78 | 0.00 | 0.00 | 0.00 |
| 1 | | Lamp_2 | 488.19 | 0.00 | 87.56 | 0.00 |
| 1 | Using dishwasher (19.43; 43.28) | Dishwasher | 1012.47 | 0.00 | 0.00 | 48.50 |
| 1 | | Light-fixture_kitchen | 325.78 | 0.00 | 79.81 | 0.00 |
| 1 | Using washer (9.77; 51.97) | Washer | 194.86 | 9.77 | 0.00 | 45.15 |
| 1 | Using dryer (2.83; 38.59) | Dryer | 452.53 | 2.83 | 0.00 | 35.73 |
| 1 | Eating (82.92; 0.00) | Light-fixture_dining-room | 1899.45 | 0.00 | 82.92 | 0.00 |
| 1 | Idle (61.83; 0.00) | Light-fixture_bedroom | 508.54 | 0.00 | 25.94 | 0.00 |
| 1 | | Light-fixture_dining-room | 1716.75 | 0.00 | 90.83 | 0.00 |
| 1 | | Light-fixture_living-room | 678.60 | 0.00 | 23.37 | 0.00 |
| 1 | | Lamp_1 | 334.57 | 0.00 | 45.55 | 0.00 |
| 2 | Watching television (56.37; 0.00) | Television | 1075.37 | 0.05 | 79.81 | 0.00 |
| 2 | | Media-streaming-player | 56.08 | 0.00 | 82.38 | 0.00 |
| 2 | | Light-fixture_living-room | 604.81 | 0.00 | 12.19 | 0.00 |
| 2 | Working with laptop (30.17; 0.00) | Laptop | 176.27 | 42.55 | 5.69 | 0.00 |
| 2 | | Monitor | 127.74 | 5.87 | 54.24 | 0.00 |
| 2 | | Light-fixture_living-room | 352.05 | 0.00 | 10.26 | 0.00 |
| 2 | Preparing food/drink (47.89; 0.00) | Rice-cooker | 98.14 | 0.00 | 5.86 | 0.00 |
| 2 | | Fridge-door | 2.34 | 0.00 | 0.00 | 0.00 |
| 2 | | Microwave | 197.03 | 6.45 | 0.00 | 0.00 |
| 2 | | Light-fixture_kitchen | 468.43 | 0.00 | 74.36 | 0.00 |
| 2 | Using bathroom (66.67; 0.00) | Light-fixture_bathroom | 192.53 | 0.00 | 66.67 | 0.00 |
| 2 | Idle (13.49; 0.00) | Light-fixture_living-room | 869.60 | 0.00 | 13.49 | 0.00 |

¹ Percentage of the average daily energy consumption of an activity that is wasted.
² Percentage of the average daily energy consumption of an activity that occurs during peak hours and could be shifted to non-peak hours.
³ Percentage of the average daily energy consumption of an appliance that is wasted as standby power usage.
⁴ Percentage of the average daily energy consumption of an appliance that is wasted as left on appliance usage.
⁵ Percentage of the average daily cost of an appliance usage that could be reduced, as it is associated with peak-hour usage.
Figure 7-6 Office occupants’ cumulative average daily electricity consumption and wasted
energy consumption in watt-hours. (SP and UL stand for “standby power usage” and “unneeded
left on appliance usage” policies, respectively.)
Comparing the percentage of waste associated with different activities in Table 7-3, for all occupants except occupant A, more than half of the energy consumption for the activity of working with computer is wasted. Occupant E and occupant B have the highest and second highest percentage of waste, both around 70%, for the activity of working with computer, followed by occupant C and occupant D with around 60% of waste. For the activity of working without computer, except for occupant A, the estimated waste values are mainly smaller than the waste values associated with the activity of working with computer (i.e., they are all less than 50%).
Since, for each occupant, the percentage of waste in lamps is almost identical for the two activities of working with computer and working without computer, it can be concluded that the contribution of the other appliances (i.e., laptops, computers and monitors) to the waste of working with computer is considerable. There is no waste associated with the activity of preparing a drink for any of the occupants who performed this activity (i.e., occupants C, D and E), as none of the coffee makers had a standby mode or a warming mode. In addition, all instances of the activity of preparing a drink were either performed simultaneously with other activities that required using a lamp (i.e., working with or without computer), or such an activity started immediately upon the completion of the activity of preparing a drink. Therefore, the waste associated with the lamp is zero for all occupants for the activity of preparing a drink.
For occupant A, the highest saving potential is related to the occupant’s lamp, for which the waste
is 30% of its consumption. This indicates that there is a high potential for reducing this occupant's waste by using an automation system to control his/her lamp. For the occupant's laptop, the dominant cause of waste is "unneeded left on usage." Overall, a laptop is either in active mode or in standby mode (when it is in a low-power state or turned off with a fully charged battery). Accordingly, by using a more efficient low-power management plan for occupant A's laptop, a large portion of the waste associated with the laptop could potentially be eliminated.
Occupant B owns two monitors, one of which (Monitor_b_2) is always in standby mode and hence is not being used. By applying standby power control strategies via an automation system, the energy consumption of this monitor could be completely eliminated. In addition, 100% of the consumption of the occupant's computer that is connected to this monitor is related to "unneeded left on usage." This implies that the occupant never uses the computer and there is a significant potential for savings. However, it might be the case that a background operation, e.g., a simulation, is running on the computer and hence, although there is a high potential for savings, the occupant is not willing to turn off the computer. Made aware of the potential savings, the occupant could make a more informed decision on whether to permit the automation system to turn the computer off when it is not in use. The waste of Monitor_a_2 mainly results from "unneeded left on usage," meaning that by having an automation system turn off the monitor, or switch it to standby mode, when it is not being used, its consumption could potentially be reduced (by about 15%).
The most distinctive wasteful behavior of occupant C, compared to the other occupants, is related to the way the occupant uses the monitor. Based on Table 7-3, occupant C's monitor consumes about 8 to 15 times more energy than the other occupants' monitors. Of this amount, almost 70% is detected as "unneeded left on usage" waste, which suggests that the monitor is probably always ON, even overnight, and the occupant rarely turns it off. Accordingly, a considerable amount of energy savings could be achieved using an automation strategy. Occupant C owns two laptops, the waste of one of which (i.e., Laptop_a_3) is mainly related to "standby power usage." The other laptop's (i.e., Laptop_b_3) consumption can potentially be reduced by about 70%; however, since the whole detected waste is associated with "unneeded left on usage," as explained before, there is a chance that, despite the considerable potential savings, the occupant might prefer the laptop to stay ON due to an ongoing background operation.
Figure 7-7 Residential occupants' cumulative values of average daily energy consumption, wasted energy consumption and peak-hour usage that could be shifted to non-peak hours, in watt-hours. (SP, UL and PH stand for "standby power usage", "unneeded left on appliance usage" and "peak hour usage" policies, respectively.)
Approximately 45% of occupant D's lamp consumption is detected as waste, which is the highest percentage of waste among the lamps owned by the occupants in this testbed. Occupant D's computer has the same percentage of waste as occupant C's laptop (70%), with the difference that the average daily consumption of the computer is about 3 times that of the laptop. Thus, in general, more energy can be saved by automating occupant D's computer and turning it off when it is not needed.
For occupant E, the waste associated with different artifacts, including the occupant's computer, is mainly related to "unneeded left on usage." The fact that the consumption of occupant E's computer is almost 3 times that of the other occupants' computers could influence the occupant's acceptance of an automation strategy that turns off the computer while it is not needed, specifically overnight.
Apartment testbeds:
As shown in Figure 7-7, compared to the occupant in apartment 2, the occupant in apartment 1 has about 3 times higher total energy consumption and 4 times higher total waste, as he/she owns a larger number of appliances. Considering the fraction of each type of waste relative to the total energy consumption of each occupant, the occupant in apartment 1 is more wasteful in terms of "unneeded left on usage" (i.e., about 55% of total consumption for apartment 1 versus 40% for apartment 2), whereas in terms of "standby power usage," the occupant in apartment 2 is twice as wasteful as the occupant in apartment 1 (i.e., about 1% of total consumption for apartment 1 versus 2% for apartment 2). Since the occupant in apartment 1 owns a washer, a dryer and a dishwasher, "peak hour usage" related waste is also estimated. Accordingly, by performing the activities associated with using these appliances during off-peak hours (i.e., 740 watt-hours of load shifting in total), there is a chance to reduce the cost of the associated appliance usage by 30% to 50%, as shown in Table 7-4.
As shown in Table 7-4, considering the energy consumption for lighting in apartment 1 across different activities and the idle condition, the waste associated with "unneeded left on usage" is quite high (about 80 to 90%) for the lighting fixtures in the kitchen and the dining room. The occupant in apartment 2 shows similarly wasteful behavior in terms of leaving unneeded kitchen lights on (about 75% of "unneeded left on usage" related waste is estimated for the kitchen lights in apartment 2). However, for different activities and the idle condition, the occupant in apartment 1 is more wasteful with regard to the living room lights than the occupant in apartment 2. The estimated waste for lighting in the other spaces in apartment 1 (i.e., bedroom and bathroom) suggests that this occupant's behavior in turning off the lights is not as wasteful as that of the occupant in apartment 2, with about 65% waste for the bathroom lights.
Considering the activity of preparing food/drink, both occupants have potential waste associated
with the kitchen appliances in warming states (kettle and coffee maker for the occupant in
apartment 1 and rice cooker for the occupant in apartment 2). The stated waste could be reduced
considerably unless one prefers to keep his/her food or drink warm for a while. In apartment 2, the
high values of the estimated waste associated with “unneeded left on usage” for the appliances that
are used for the activity of watching television (i.e., television and media streaming player) are
indicators of occupant’s wasteful habit in keeping the television on for long hours, even when it is
not needed. Although the total energy consumption of the microwaves in the two apartments is approximately similar, the estimated "standby power usage" related waste for the microwave in apartment 1 is much higher than for the microwave in apartment 2 (about 26% compared to 6%). Therefore, standby power control by an automation system could potentially result in more savings in apartment 1.
For the activity of working with a laptop, the occupant in apartment 1 has higher total energy
consumption than the occupant in apartment 2 does. As the estimated waste values suggest, both
occupants follow the same behavior in terms of the dominant type of waste associated with the
appliances in this activity, meaning that they both have higher “standby power usage” related waste
for the laptop and higher “unneeded left on usage” related waste for the monitor.
7.5. Conclusion
In this chapter, we explored real-time activity recognition for improving energy efficiency in
buildings by introducing an unsupervised framework for real-time activity recognition and
detection of wasted electricity cost and energy consumption. The framework uses a combination
of inductive and deductive reasoning to eliminate the need for collecting labeled activity data
for training while achieving high performance. We developed three sub-algorithms: action detection, activity recognition and
waste estimation. In our framework, actions are the particular changes that are executed either by
occupants or by artifacts in the environment. Combinations of different actions create the activities.
As the real-time input, the action detection algorithm receives the data from the sensing system
and detects the occurred actions using unsupervised machine learning algorithms (i.e., Expectation
Maximization and Principal Component Analysis). Detected actions are then
used by the activity recognition algorithm to recognize the activities through semantic reasoning
on our constructed ontology, which contains activities along with additional contextual
information modeled as concepts and their relations. For a given appliance, based on the
recognized activities and waste estimation policies that are applicable, the waste estimation
algorithm determines whether the current consumption of the appliance is considered as waste and
accordingly estimates the potential savings. To evaluate the performance of our framework, three
experiments were carried out over two weeks in a testbed office with five occupants and in two
single-occupancy apartments, in which the performance of the action detection and activity
recognition was evaluated using the ground truth labels for actions and activities. Results showed
average accuracy of 96.8% for action detection and 97.6% for activity recognition. In addition, the
results from the waste estimation showed that an average of 35.5% of the consumption of an
appliance or lighting system in the testbeds could be potentially reduced.
Chapter 8. Contextual and Subjective Factors Affecting Automation
Preferences
8.1. Methodology
We carried out a survey to determine how preferences for level of automation vary by control
contexts as well as individuals’ personalities and demographic characteristics. The control contexts
investigated in our study include rescheduling an energy consuming activity, appliance state
control, and lighting control. Depending on the user participation level in the control process,
four levels of automation were defined for each context. The following sections provide more
details regarding this survey study as well as the achieved results.
8.2. Survey Study
For this study, a survey was designed and distributed using the online platform of Amazon
Mechanical Turk. Data from 250 respondents were collected and analyzed using Generalized Linear
Mixed Models. Below is a description of the details of this study.
Questionnaire Design
We first determined our dependent and independent variables with regard to our research
questions. Accordingly, three main groups of independent variables, i.e., demographics,
personality and context, and one dependent variable, i.e., automation level, were defined. For the
demographic variables, which are categorical, we considered age, gender, marital status,
education level, and income level. These five variables were identified based on an extensive
literature review of the factors that impact new technology acceptability. For the variable of
personality, we used the Big Five personality traits [17] as standard representations of personality
dimensions. These traits include extraversion, agreeableness, conscientiousness, neuroticism and
openness to experience. Extraversion is a measure of sociability, such that people with high scores
tend to enjoy the company of others, whereas those with low scores appreciate being alone more.
Agreeableness measures one’s trusting nature and tendency of being sympathetic. Accordingly,
people with high scores are generally forgiving and compassionate, while those with low scores
are critical and suspicious. Conscientiousness measures how well a person is organized, such that
those with high scores are self-disciplined and careful, while those with low scores are
disorganized. Neuroticism measures the tendency to be anxious about things. People with high
neuroticism are nervous and tend to worry about things, but, in contrast, those with low scores are
calm and relaxed. Finally, openness to experience is a measure of creativity and curiosity, such
that those with high scores are not conservative and always seek new experiences, whereas the
ones with low scores are conventional and uncreative.
To define the context as a variable, we focused on the activities that involve use of appliances and
lighting systems and the associated automation strategies that intend to improve energy efficiency
of these service systems, with respect to occupant needs. For appliances, the automation strategies
can be categorized into three main groups.
The first group are the strategies that aim at performing load control for peak electricity use
periods. During the peak demand hours, utilities generally face difficulties in providing additional
capacity and hence they try to reduce the demand by shifting electricity-using activities from peak
hours to non-peak hours. This approach is often called demand response. To encourage users to
postpone their activities to non-peak hours, several utility providers have been offering non-flat
electricity rates, i.e., higher prices during peak demand hours and lower prices at other times [116].
As residential end uses have been shown to significantly contribute to the electricity peak demand
[123], we defined the first two contexts as rescheduling the activity of using a dishwasher and
rescheduling the activity of using a washer and a dryer.
Another group of automation strategies to improve energy efficiency of appliances are those which
control appliance standby power. Standby power is the electricity used by appliances while they
are not on but still plugged-in. Studies have suggested approaches that are capable of reducing the
standby power consumption by 30% [117]. Accordingly, the third context in our questionnaire was
defined as appliance standby power control. The last group of automation strategies are those that
control the unneeded appliances that are left on. These approaches have applications both in
safety, e.g., turning off an unneeded iron, and electricity saving, e.g., turning off the computers
when they are not in use [124]. Consequently, the fourth context was defined as turning off the
unneeded left on appliances.
For lighting systems, previous studies have shown that occupancy-based lighting system control
can save up to 30% of lighting electricity consumption [125]. Accordingly, the fifth context in our
questionnaire was defined as turning off the unneeded left on lights. Although additional
electricity savings can be achieved via the use of external natural light sources [126], known as
daylight harvesting, through the application of smart blinds, we did not consider daylight
harvesting in our survey, in order to avoid any complexity for respondents with different
backgrounds in understanding the concepts.
As explained before, contexts involve a set of decision making and action implementation
functions (i.e., output functions). These functions can be fully or partially automated. In agreement
with this explanation, for our dependent variable (i.e., automation level), we defined two extreme
cases of full automation and no automation. In full automation, both decision making and action
implementation functions are fully assigned to the automation system and hence, zero user
participation is required. In contrast, in no automation, there is zero level of automation and hence,
all functions are executed manually. Theoretically, from full to zero, the level of automation can
be altered in a continuous range. As depicted in Figure 8-1, the user participation increases as the
level of automation decreases. Accordingly, in between the two extreme cases of full and no
automation, we suggested two levels of automation: inquisitive automation and adaptive
automation. The former is in the region with the higher amount of user participation than the
amount of automation, as opposed to the latter, which is in the region with the lower amount of
user participation than the amount of automation. In inquisitive automation, the automation system
always asks for occupant permission before taking any action. In other words, based on the
decision making function, which is always manual for inquisitive automation, action
implementation function could be either manual or automated depending on occupant decision.
On the other hand, in adaptive automation, the automation system learns occupant needs and patterns
over time by occasionally getting feedback from the occupant. This implies that the decision making
function in adaptive automation is partially manual and hence, the level of automation for action
implementation is also higher than in inquisitive automation. It should be pointed out here
that although between full and no automation, more than two levels of automation could be
defined, in order to avoid user confusion in differentiating the condition of giving more control to
the automation system versus the condition in which user has more control, we limited our
investigations to the suggested two levels.
Figure 8-1 Comparison of different automation levels in terms of amount of automation and user
participation.
As there were already existing standard questions for measurements of demographics and
personality traits, our main challenge in designing the questionnaire was to find an approach to
well explain the third independent variable, i.e., context, and our only dependent variable, i.e.,
automation level. Due to the complexity of these concepts, prior to asking any questions, we first
introduced the concepts using descriptive animated videos¹. In each video, we presented a short
explanation of a given context, along with the possible scenarios for different automation levels.
To ensure that respondents had watched the videos and understood the concepts before answering
the questions, we designed trap questions in addition to our main questions at the end of each
video. These trap questions required specific answers that could only be selected if the
respondent had carefully watched the videos.
Following the questionnaire design, in order to evaluate the appropriateness, clarity and validity
of the questions and choices, we first pre-tested our survey by conducting a pilot study with several
iterations, using the online platform of Qualtrics. When the respondents completed the survey
in our pilot testing, they were asked to provide us with feedback regarding the videos, the
questions and the user interface of the survey. Based on the feedback, we revised and finalized the
questionnaire.
¹ https://www.youtube.com/playlist?list=PLJxJX4lDIs9vQEjHBlWLVoVubMmZmzCgV
Data Collection
Once the questionnaire was finalized, we conducted our data collection by distributing the survey
via the online platform of Amazon Mechanical Turk. To prevent multiple participations by the
same person, we checked the uniqueness of the IP addresses.
All of the respondents were randomly selected from users of Amazon Mechanical Turk who were
residents of the United States, and they were given a small monetary reward for their
participation in the survey. The survey was open for 2 months and a total of 250 valid responses
were collected. The validity of a response was ensured based on the answers of the respondent to
the designed trap questions explained previously. As these questions required specific answers,
the data associated with invalid responses were discarded. As a general rule, logistic
regression-based analysis, which will be explained in the next section, requires more than 10
observations per degree of freedom [127]. Based on our variables (Table 8-1), the degrees of
freedom in our study are 19. Hence, 250 responses satisfy the stated rule.
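As a small arithmetic sketch, the rule of thumb above applied to this study's numbers (the helper name is ours, not from the study):

```python
# Rule of thumb: logistic regression-based analysis needs more than
# 10 observations per degree of freedom (model parameter).
def min_observations(degrees_of_freedom, obs_per_df=10):
    """Minimum sample size implied by the rule of thumb."""
    return obs_per_df * degrees_of_freedom

df = 19          # degrees of freedom in this study
collected = 250  # valid responses collected

required = min_observations(df)  # 10 * 19 = 190
print(collected > required)      # True: 250 responses satisfy the rule
```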
Statistical Analysis
The collected survey data were first cleaned based on the validity of the responses and coded for
statistical analysis. The software we used to perform our analysis was SPSS version 24.0.
For the statistical analysis, we ran a set of descriptive and inferential analyses. The descriptive
analysis provided us with a high-level insight into the data and the distribution of the answers,
while the inferential analysis offered an estimation of the variables' effects on the outcome. The appropriate
inferential analysis technique was selected based on the type of variables and the data
characteristics. Table 8-1 presents the variable definitions and types. Since the investigated
variables in this study were all categorical (either nominal or ordinal), some well-established
approaches, such as ANOVA, that are suitable for continuous variables could not be used here
[128,129]. Instead, we used a logistic regression-based approach (i.e., the Generalized Linear
Mixed Model [130]).
Table 8-1 Characteristics of the variables used in this study.
Logistic regression is a special case of the generalized linear model, which itself is a type
of regression analysis. The math behind logistic regression is underlain by the odds ratio, such
that the log of the odds is expressed as a linear combination of the independent variables (Equation 8-1).
log(π_c / π_r) = α + β_1 x_1 + β_2 x_2 + β_3 x_3 + … + β_k x_k     (Equation 8-1)

In Equation 8-1, π_c / π_r is the odds of outcome c over outcome r, α is the intercept of the
model, and β_i is the coefficient of the independent dummy variable x_i (which can take values
of either 1 or 0), known as a fixed effect.
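As an illustrative sketch of Equation 8-1, the linear predictor can be converted back to an odds ratio by exponentiation. The α and β values below are invented for demonstration, not coefficients estimated from the survey data:

```python
import math

def log_odds(alpha, betas, xs):
    """Right-hand side of Equation 8-1: alpha + sum(beta_i * x_i)."""
    return alpha + sum(b * x for b, x in zip(betas, xs))

# Hypothetical intercept, coefficients, and dummy-coded predictors (0 or 1).
alpha = -0.5
betas = [1.2, -0.3, 0.8]
xs = [1, 0, 1]

lo = log_odds(alpha, betas, xs)  # -0.5 + 1.2 + 0.8 = 1.5
odds = math.exp(lo)              # pi_c / pi_r, the odds of outcome c over r
print(round(odds, 3))            # exp(1.5) ≈ 4.482
```

A positive log odds (here 1.5) means outcome c is more probable than the reference outcome r, matching the interpretation given for the binary models below.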
We had four possible values for the dependent variable (see Table 8-1), which gave rise to the
need for multinomial logistic regression, which generalizes logistic regression to multiclass
cases. Multinomial logistic regression is a generalized linear model where the outcome has a
multinomial distribution and the link function is logit. Given that there are n possible values
for the dependent variable, multinomial logistic regression proceeds by fitting n-1 separate
binary logistic models (Equation 8-1), each comparing a category c with a baseline category r
known as the reference category. In each model, if the log odds ratio is positive, the probability
of category c is higher than that of the reference category, and if it is negative, the probability
of the reference category is higher. Accordingly, the positive coefficients in each model favor
increasing the probability of category c, whereas negative ones favor increasing the probability
of the reference category. As mentioned before, the fixed effects in the models must be encoded
as dummy variables. Therefore, given that there are m distinct possible values for an independent
variable, m-1 coefficients are needed for that particular encoded dummy variable in each binary model.
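The m-1 dummy coding described above can be sketched in plain Python; the category names are hypothetical and the convention of dropping the last category as the reference mirrors the "all zeros = reference" scheme explained later for Table 8-3:

```python
def dummy_code(value, categories):
    """Encode a categorical value as m-1 dummy variables,
    dropping the last category as the reference
    (an all-zero vector denotes the reference category)."""
    non_reference = categories[:-1]  # the m-1 retained categories
    return [1 if value == c else 0 for c in non_reference]

# A predictor with m = 3 levels; 'high' serves as the reference category.
income_levels = ["low", "average", "high"]

print(dummy_code("low", income_levels))      # [1, 0]
print(dummy_code("average", income_levels))  # [0, 1]
print(dummy_code("high", income_levels))     # [0, 0]  <- reference category
```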
Multinomial logistic regression happens to be a convenient choice for our case. However, as we
took multiple measures per subject, i.e., one measure for each context, our responses were not
independent. To resolve the dependency of the observations, we used an extension of multinomial
logistic regression, called the Generalized Linear Mixed Model (GLMM). The advantage of
GLMM over logistic regression is that, in addition to the fixed effects, it is capable of accounting
for random effects and hence is a suitable approach for repeated measures. In the GLMM,
random effects are modeled by assuming random intercepts; that is, each subject is assigned a
different intercept but the same coefficients.
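A minimal sketch of the random-intercept idea follows. It is illustrative only: the subject intercepts and coefficients are invented, not estimated from the survey data, and no fitting is performed:

```python
# Fixed effects: the same coefficients are shared by every subject.
fixed_coefs = {"context_3": 0.9, "high_income": 0.4}

# Random effects: each subject gets its own intercept,
# while the coefficients above stay identical across subjects.
subject_intercepts = {"subject_1": -0.2, "subject_2": 0.5}

def subject_log_odds(subject, predictors):
    """Linear predictor for one subject: random intercept + fixed effects."""
    eta = subject_intercepts[subject]
    for name, value in predictors.items():
        eta += fixed_coefs[name] * value
    return eta

x = {"context_3": 1, "high_income": 0}
# Same predictors, different subjects: log odds differ only by the intercept.
print(round(subject_log_odds("subject_1", x), 2))  # -0.2 + 0.9 = 0.7
print(round(subject_log_odds("subject_2", x), 2))  #  0.5 + 0.9 = 1.4
```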
We initially fitted the GLMM with all independent variables as fixed effects and the individual's
identity as the random effect. We then fitted simpler GLMMs with the same random effect by
progressively deleting the non-significant fixed effects until all remaining fixed effects were
significant. Comparisons between models were carried out using the Akaike Information Criterion
(AIC), which is a relative measure for model comparison based on log-likelihood [131,132].
Models with lower AICs are preferred, as they suggest a better model fit. In order to test the
significance of the fixed effects in the model selection, we used the type III F-test, also known
as the fixed effect test. In the type III F-test, for a given fixed effect, the null hypothesis
tests whether the coefficients associated with that effect are zero. In case the effect is
categorical with more than two levels, several coefficients are associated with the effect, and
the null hypothesis tests whether all these coefficients are zero. If the p-value, i.e., the
probability of observing a result at least as extreme as the data under the null hypothesis, is
less than the critical significance level, i.e., 0.05, the null hypothesis is rejected at the 95%
confidence level and it can be concluded that the fixed effect contributes to the model by
affecting the outcome.
Since we had a multilevel dependent variable, the final selected GLMM was a combination of binary
models. We used t-tests to evaluate the significance of the different levels of the fixed effects
in the binary models. In the t-test, for a given level of a fixed effect in an estimated binary
model, i.e., x_i, the null hypothesis tests whether the associated coefficient, i.e., β_i, is
zero. If the p-value is less than 0.05, the null hypothesis is rejected.
8.3. Results
In our sample, about 53% were male and 47% were female. About 45% of the sample were 32
years old or younger, 35% were between 33 and 47 years old, and 20% were 48 years old or older.
The majority of respondents, i.e., about 70%, were single and 30% were married. Finally, about
22% of the respondents were holding less than a Bachelor’s degree, 47% were holding (or were
currently enrolled in) a Bachelor’s degree, and 31% were holding (or were currently enrolled in)
a Master’s degree or Doctorate.
We first carried out a set of descriptive analyses to obtain a high-level insight into the data. Along
this line, we explored the measure of data dispersion via attaining percentage of respondents in
each group of automation preference for different contexts (Figure 8-2).
Figure 8-2 suggests that no automation is the least preferred option by the respondents. To confirm
the significance of this statement, in Figure 8-3, we present the cumulative distribution of
responses in each group of automation preferences, along with their 95% Confidence Intervals
(CIs), which are depicted by the error bars [133]. It can be seen that the upper bound of the CIs
for no automation is lower than 25%, which would be the percentage of preferring an option in the
case of a
random selection. In addition, the CIs of no automation do not overlap with the CIs of other
automation levels. Accordingly, with 95% confidence, it can be concluded that no automation is
the least preferred option compared to other automation levels.
Figure 8-2 Distribution of respondents in each group of preferred automation type for different
contexts.
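The 95% confidence intervals discussed above can be sketched with a normal-approximation (Wald) interval for a proportion. The 18% share for no automation below is a hypothetical value chosen for illustration, not a figure reported in the study:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """95% normal-approximation (Wald) CI for a proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Hypothetical: 18% of 250 respondents preferred no automation.
lower, upper = wald_ci(0.18, 250)
print(round(lower, 3), round(upper, 3))  # ~0.132 0.228
print(upper < 0.25)  # True: upper bound below the 25% random-choice baseline
```

A non-overlap check like this (upper bound of one interval below the lower bounds of the others) is what supports the "least preferred option" conclusion.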
For the inferential analysis, we explored the effects of the independent variables on our dependent
variable by constructing GLMMs. Except for no automation, which was selected by a considerably
smaller number of respondents, any of the three automation levels, i.e., full automation, adaptive
automation and inquisitive automation, could be an adequate choice for a reference category, as
they were selected with close frequencies in total. Accordingly, we constructed three GLMMs with
each of the three automation types as the reference category. Based on the previous explanations,
we initially fitted the GLMMs with all independent variables and iteratively simplified the models.
Table 8-2 summarizes the results of the performed model selection by presenting the F-values
and p-values. In calculating the F-values, the required degrees of freedom were estimated using
the Satterthwaite approximation method [134]. As shown in Table 8-2, we achieved the final GLMM
for each reference category in 2 iterations. The final models (i.e., GLMM 2 – F, GLMM 2 – A,
and GLMM 2 – I) have lower AICs than the previous models (i.e., GLMM 1 – F, GLMM 1 – A,
and GLMM 1 – I). Moreover, they only contain fixed effects that are significant (p-value <= 0.05).
Figure 8-3 Cumulative distribution of respondents in each group of preferred automation type.
Table 8-2 Results of type III F-tests and computed AICs for built GLMMs during model
selection.
ᵃ Final selected model.
* Significant variables (p-value <= 0.05).
Table 8-3 Estimated GLMM coefficients and their associated t-values and p-values.
ᵃ Predictor's reference category.
ᵇ Context 1: rescheduling an activity (dishwasher), Context 2: rescheduling an activity (washer and dryer), Context
3: managing standby power, Context 4: turning off the unneeded appliances, and Context 5: turning off the
unneeded lights.
** Significant terms (p-value <= 0.05).
* Marginally significant terms (0.05 < p-value < 0.1)
Table 8-4 Estimated GLMM coefficients and their associated t-values and p-values.
ᵃ Predictor's reference category.
ᵇ Context 1: rescheduling an activity (dishwasher), Context 2: rescheduling an activity (washer and dryer), Context
3: managing standby power, Context 4: turning off the unneeded appliances, and Context 5: turning off the
unneeded lights.
** Significant terms (p-value <= 0.05).
* Marginally significant terms (0.05 < p-value < 0.1)
The final models confirm that, among the demographic-related variables, income and education
level are significant, and among the personality-related variables, agreeableness, neuroticism
and openness are significant.
Table 8-3 presents the GLMM coefficients for the fixed effects along with the associated results
of the conducted t-tests. As explained before, in GLMM, n-1 binary models, where n is the number
of outcome categories, are fitted. Accordingly, we obtained three binary models, each representing
the log odds ratio of a distinctive outcome category, i.e., adaptive automation, inquisitive
automation and no automation, over the outcome's reference category, i.e., full automation. As
shown in the table, there is no coefficient reported for the last category of each predictor,
i.e., the predictor's reference category. This can be explained by the structure of the binary
models in GLMM, in which predictors are dummy coded as 0 and 1. For a given predictor, the
particular case of all non-reference categories being zero indicates that the reference category
is 1. Hence, an additional coefficient for the predictor's reference category would be redundant.
Based on the estimated models (Table 8-3 and Table 8-4), the following observations regarding
the impacts of the independent variables are obtained:
Context variable: The achieved p-values confirm that the effects of the context variable are
significant (p-value <= 0.05) or marginally significant (0.05 < p-value < 0.1) in all six binary
models. Accordingly, the preference for different automation levels is affected by context in the
following ways:
• Full automation: The coefficients in binary models 1, 2 and 3 for the context variable
suggest that, among the different context categories, context 5 (i.e., turning off the unneeded
lights) has the highest probability of choosing full automation over the other options (i.e.,
adaptive, inquisitive, and no automation), and context 4 (i.e., turning off the unneeded
appliances) comes in second place. These observations confirm the dominance of the full
automation preference in contexts 4 and 5.
• Inquisitive automation: As implied by the coefficients of the context variable in binary
models 2, 4 and 6, among the different context categories, the probability of preferring
inquisitive automation over the other options (i.e., full, adaptive, and no automation) is the
highest in context 2 (i.e., rescheduling the activity of using a washer and a dryer). Context 1
(i.e., rescheduling the activity of using a dishwasher) holds the second highest probability
of choosing inquisitive automation over the other options. Accordingly, inquisitive
automation is the dominantly preferred automation level in contexts 1 and 2.
• Adaptive automation: The relevant models for assessing the context's effect on adaptive
automation are binary models 1, 4 and 5. As the coefficients in these models suggest, among
the different categories of the context variable, context 3 (i.e., managing standby power)
possesses the highest probability of preferring adaptive automation over the other options
(i.e., full, inquisitive, and no automation). In other words, the dominant preference observed
for context 3 is adaptive automation.
Education level: The p-values in binary models 1, 2, and 3 confirm the marginally significant
effect (0.05 < p-value < 0.1) of education level. As the associated coefficients imply, those with
a lower than average level of education are more willing to prefer full automation over the other
automation levels compared to those with a higher than average level of education.
Income level: Based on binary models 1, 3 and 4, those with a higher than average level of income
are more willing to choose adaptive automation over the other options (i.e., full, inquisitive and
no automation) compared to those with an average income level (p-value <= 0.05) and a lower than
average income level (0.05 < p-value < 0.1).
Agreeableness: The effect of agreeableness is significant in all binary models related to the odds
ratio for no automation (binary models 3, 5 and 6). As the coefficients in these models suggest,
those with lower levels of agreeableness are more willing to prefer no automation over any kind
of automation (i.e., full, adaptive and inquisitive). In other words, agreeableness positively
affects preferring any kind of automation over no automation. Agreeableness also has a significant
positive effect on preferring adaptive automation over full automation, based on binary model 1.
Neuroticism: For neuroticism, the achieved p-values confirm its significant effect in binary models
1, 2, 3 and 4. The negative coefficients for a lower than average level of neuroticism in the first
three stated binary models, which are related to the odds ratio for full automation, show that those
low in this trait are more willing to choose full automation over the other options (i.e., inquisitive,
adaptive and no automation) compared to those high in this trait. Also, based on binary model 4, those
high in neuroticism are less willing to choose adaptive automation over inquisitive automation
compared to those with an average level of neuroticism.
Openness to experience: Openness to experience has a significant effect in binary models 3, 5 and
6, which are all related to the odds ratio for no automation. As the coefficients suggest, respondents
with a lower level of openness to experience are less open to any type of automation. Therefore, as
openness to experience decreases, the probability of preferring no automation increases.
8.4. Discussion
Our findings demonstrate the willingness of occupants to have some level of automation in their
homes, as no automation appeared to be the least preferred option. This probably emerges from the
fact that people have recently gained access to more mature automation technology in different
aspects of their lives and hence have more trust in the concept and applications of automation.
Despite the common desire for automation, as suggested by previous studies (e.g., [28]) and also
confirmed by our findings, the preferred automation level is not common across the different
contexts. As indicated by our results, regardless of the effects of the other variables (i.e.,
demographic and personality related variables), the highest probabilities of preferring full
automation, inquisitive automation and adaptive automation are, respectively, associated with the
contexts of turning off unneeded appliances and lights, rescheduling an activity (using a
dishwasher or a washer and a dryer), and standby power management.
In line with the explanations we previously provided, the major difference between the automation
levels suggested in this study is the level of user participation required in the automation process.
Accordingly, full automation and adaptive automation are in the region with the higher level of
automation than the user participation, as opposed to inquisitive automation, which is in the region
with the lower level of automation than user participation. Since automation level should be
matched with the function’s workload, full automation and adaptive automation are more suitable
for the contexts with higher levels of workload, whereas inquisitive automation is more appropriate
for the contexts with lower levels of workload. Our findings are in alignment with this argument.
Usually, for an average occupant, the use of a dishwasher or a washer and a dryer is not as frequent
as the use of lights and the other appliances that are associated with the contexts of managing standby
power and turning off unneeded appliances and lights. Since the frequency of a function is an indicator
of its workload, for the context of rescheduling an activity of using a dishwasher or a washer and
a dryer, due to the relatively lower levels of workload, the choice of inquisitive automation
increases occupant control over the automation process without putting an intolerable burden on
the occupant. On the other hand, in the contexts of standby power management, and turning off
unneeded appliances and lights, due to the higher levels of workload, full automation and adaptive
automation are more suitable options.
The main difference between adaptive automation and full automation is the learning capability of
the adaptive automation. Accordingly, while in full automation, the automation system operates
based on a set of fixed rules, adaptive automation offers flexibility via learning the occupant
dynamic patterns over time. Clearly, all of this flexibility comes at the cost of occasional user
participation. Accordingly, our findings could be an indication of the respondents' desire to accept
more workload in exchange for a flexible automation for standby power control, which can learn
respondents’ dynamic patterns in this context. In contrast, for the context of turning off unneeded
appliances and lights, respondents prefer not to accept any workload, probably due to the adequacy
of a fixed automation in this context.
Among the demographic-related variables (i.e., age, gender, marital status, education level and
income level), our results show that education and income level have marginally significant and
significant effects, respectively, on automation preference. Based on the derived models, as the
level of education and income increases, the probability of preferring adaptive automation
increases, whereas the probability of preferring full automation decreases. Considering the known
association between higher education and higher income [135], this finding suggests that individuals
with higher education or income place more trust in the concept of adaptive automation, probably due
to satisfactory previous experiences; experiences that people with lower education or income levels
apparently could not have had because of the lack of access to affordable adaptive automation
solutions.
Among the personality-related variables (i.e., extraversion, agreeableness, conscientiousness,
neuroticism and openness to experience), our results showed the significant effects of
agreeableness, neuroticism and openness to experience on the automation preference. Based on
our results, both agreeableness and openness to experience have positive effects on preferring any
kind of automation (i.e., full, inquisitive and adaptive automation) over no automation. In addition,
our results suggest that the respondents with high levels of agreeableness are more willing to
choose adaptive automation over full automation. These findings could be explained by the
relationship between one’s personality and general trust, which impacts human response to
non-human agents, such as automation systems. Based on previous studies, general trust is positively
correlated with the level of agreeableness and openness to experience [136–138]. Accordingly,
people with high levels of agreeableness or openness to experience tend to trust more in unfamiliar
situations, which motivates them to accept new technologies. Evidence on this argument can be
found in studies such as [33] and [139], where agreeableness is shown to have a positive effect on
accepting mobile commerce and openness to experience is shown to positively affect using Facebook
as a communication tool, respectively. Our results also suggest that respondents with a higher level
of neuroticism are less willing to prefer full and adaptive automation, but more willing to choose
inquisitive or no automation. In other words, neuroticism negatively affects preference for the
automation levels in the region with a higher amount of automation than user participation
(Figure 8-1). As individuals high in this trait are prone to negative emotions such as anxiety, they
are low in general trust [136–138]. Accordingly, it might be the case that they do not trust full
automation to satisfy their convenience and hence want to always retain control over it.
The created models and resulting insights can be used as a starting point to design an automation
system in buildings that better meets user convenience. Our study indicated that automation systems
in buildings must be adjustable to provide various levels of automation in different contexts;
providing a single type of automation to everyone, which is typically the case today, does not seem
to be an acceptable solution. In addition, the majority of existing automation solutions (e.g.,
[73,89,126]), particularly for appliance control, do not offer adequate adaptive automation,
meaning that they mainly provide either full automation or inquisitive automation. However, as
suggested by our results, for standby power control of appliances, adaptive automation was
associated with the highest probability of being preferred. Therefore, in addition to providing
different levels of automation by context, the automation system must be equipped with a learning
capability to be able to offer adaptive automation.
8.5. Conclusion
In this chapter, we investigated occupants’ automation preferences in different contexts for lighting
system and appliance control in residential buildings. The contexts we focused on in our study
include rescheduling an energy-consuming activity, management of different appliance states with
regard to occupants’ activities, and occupancy-based control of lighting systems. For each context,
we defined four levels of automation: full automation, inquisitive automation, adaptive automation
and no automation. A survey of 250 respondents was carried out to determine how the automation
preferences vary by personality-related and demographic-related characteristics, namely the big five
personality traits and age, gender, marital status, education level and income level, respectively,
in residential buildings. Due to the complexity of the automation concepts, we used descriptive
animated videos to explain the different concepts to the respondents. To analyze the collected data,
we carried out descriptive and inferential analysis using statistical techniques. Along this line,
we obtained a model using a Generalized Linear Mixed Model. Our findings indicated that automation
preferences varied by context: for rescheduling an activity, inquisitive automation was the most
likely preferred option; for managing appliance standby power, adaptive automation had the highest
probability of being preferred; and for turning off unneeded appliances and lights that were left
on, it was full automation. Our findings also indicated that, among the demographic-related
variables, the effects of education and income level on automation preference were marginally
significant and significant, respectively. Among the personality traits, agreeableness, neuroticism
and openness to experience were found to be significant. Finally, our investigation showed that in
all contexts no automation was the least preferred option.
Chapter 9. Activity-driven and User-centered Automation of Appliances
and Lighting Systems in Buildings
9.1. Methodology
The algorithmic components of our proposed automation include: (1) an algorithm for activity
recognition and waste detection (our prior work [45]); (2) an algorithm for planning control
commands (section 9.1.1); (3) an adaptive algorithm for local learning (section 9.1.2); and (4) an
iterative algorithm for global learning (section 9.1.3).
As the real-time input, the activity recognition algorithm receives the data captured by the sensing
system that is installed in the built environment. Our previously developed algorithm, for real-time
activity recognition and waste detection in [45] is based on a hybrid application of ontological and
probabilistic reasoning with three key sub-algorithms: action detection, activity recognition and
waste estimation. Using unsupervised machine learning, the action detection sub-algorithm detects
the occurred actions from the sensing data captured by the sensors. Detected actions are then used
by the activity recognition sub-algorithm to recognize the activities through semantic reasoning.
Finally, based on the recognized activities, the waste estimation algorithm identifies the potential
waste. The details of this algorithm can be found in [45]. The outputs of the activity recognition
and waste detection algorithm are received by the command planning algorithm. When the planning
ends, the automation sends the commands to the microcontrollers (which are hardware units with
embedded software to manage functions of the actuators connected to appliances and lighting
system) and consequently, planned commands are executed by actuators within seconds. In the
proposed command planning procedure, user preferences for automation level are taken into
consideration by means of: (1) constraints in the task network that might change in time to reach
the optimum performance learnt by the iterative global learning algorithm and (2) a model to
predict user preferences per context that is built and updated via the adaptive local learning
algorithm. In the following sections, we provide more details on the command planning, adaptive
local learning, and iterative global learning algorithms.
Dynamic Command Planning
When the activity recognition algorithm recognizes the beginning, or termination of an activity (or
the start or end of wasted energy consumption), by using the HTN planning, our command
planning algorithm selects the appropriate actions (e.g., turn on devices related to the activity,
switch the device to standby power, etc.) among a set of possible actions, with regard to the
current condition. HTN planning is an approach for automated planning in artificial intelligence
[140,141]. The main components of HTN planning are states and tasks. While a state is a description
of the current situation, a task is a description of actions to perform. There are three types of tasks:
primitive, compound and goal. A primitive task corresponds to a basic action, whereas a compound
task is composed of other simpler tasks (i.e., primitive tasks). The most general compound task is
a goal task, which is specified in terms of conditions that have to be made true via a sequence of
primitive tasks. Constraints among tasks are expressed in the form of a network (i.e., task network).
Accordingly, the task network specifies the condition that is necessary for a primitive or a
compound task to be executed. This way, execution of a given task is feasible only if a set of other
tasks are done and the constraints among them are satisfied, based on the task network. To better
understand the different components of the planning process, let us assume a scenario in which we are
trying to control the devices for the activity of working with a computer, given that the user uses a
computer, a monitor and a desk lamp for this activity. In this example, the goal task could be
controlling the activity of working with the computer, and the compound tasks could be controlling
the computer, controlling the monitor and controlling the lamp. Let us assume that the user prefers
inquisitive automation (i.e., the automation needs user approval before executing the control
commands), adaptive automation (i.e., the automation acts based on what it has learnt regarding user
preference for performing a control command in different conditions) and full automation (i.e., the
automation executes the control commands in all conditions) for controlling the computer, the monitor
and the lamp, respectively. Based on the stated automation levels, the compound tasks could be
composed of different primitive tasks, including notifying the user, getting the user response
(denial/approval), extracting the feature vector from the current condition (section 9.1.2), calling
the adaptive automation model to predict user preference (full/inquisitive/no automation)
(section 9.1.2), turning off/turning on/switching to standby mode, and doing nothing, as shown in
Figure 9-1.
We implemented our algorithm by defining all possible tasks for controlling appliances at different
levels and also the orders in which certain tasks must be performed in an automation level (e.g.,
first wait for user approval and then turn off a device (Figure 9-1)). In section 9.1.2., more details
on our proposed automation levels are provided.
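The HTN-style decomposition described above can be sketched as a small task network in code. This is a minimal illustration, not the dissertation's actual implementation: the task names, automation-level assignments, and the primitive-task sequences are assumptions chosen to mirror the computer/monitor/lamp example.

```python
# Minimal sketch of HTN-style command planning for appliance control.
# Compound tasks decompose into ordered primitive-task sequences according
# to the automation level assigned to each task (illustrative assumptions).
TASK_NETWORK = {
    "control_computer": {  # inquisitive: ask the user first, then act
        "inquisitive": ["notify_user", "get_user_response", "turn_off"],
    },
    "control_monitor": {   # adaptive: consult the learnt preference model
        "adaptive": ["extract_features", "predict_preference", "turn_off"],
    },
    "control_lamp": {      # full: act unconditionally
        "full": ["turn_off"],
    },
}

def plan(goal_tasks, level_assignment):
    """Decompose compound tasks into an ordered plan of primitive tasks."""
    commands = []
    for task in goal_tasks:
        level = level_assignment[task]
        commands.extend(TASK_NETWORK[task][level])
    return commands

levels = {"control_computer": "inquisitive",
          "control_monitor": "adaptive",
          "control_lamp": "full"}
result = plan(["control_computer", "control_monitor", "control_lamp"], levels)
```

In this simplification the adaptive branch always ends in `turn_off`; in the actual system the final primitive would depend on the model's predicted preference.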
Since automation preferences might change over time (section 9.1.3), instead of building a static
task network, our command planning algorithm is integrated with the global iterative learning
algorithm (section 9.1.3) to be notified of changes in automation preferences; accordingly, it
updates the task network by changing the possible set of primitive tasks for the compound tasks in
the task network (i.e., changing the automation level assignments).
Figure 9-1. Example of goal, compound and primitive tasks for command planning. It is assumed
that the user prefers inquisitive automation, adaptive automation and full automation for
controlling the computer, controlling the monitor and controlling the lamp, respectively
Learning Preferences for Different Contexts and Conditions
Based on the literature, a user’s preference for the level of automation varies by the context in
which automation actions are performed (e.g., the context of controlling a computer for the activity
of working with a computer) and by conditions (i.e., temporal and situational conditions) [11,14,21,27].
Situational conditions might be related to the duration of activities and occurrence of concurrent
or consecutive activities, and so on. For example, a user might prefer to keep her computer on even
though she is not present because a simulation program is running in the background. Temporal
conditions capture variations in preferences due to time-related differences (e.g., variations based
on time of day and day of week). For example, a user
might prefer a different level of automation for the activity of washing dishes during the weekdays
vs. during the weekend. In order to identify the appropriate level of automation in different
contexts and conditions, since there was no existing taxonomy specific to the building automation
domain, we have created our own taxonomy for the level of automation in our previous work as
follows [57]:
Level 1: No automation (Fully manual): The automation system offers no assistance (zero
automation and maximum user participation).
Level 2: Inquisitive automation: The automation system asks for user approval prior to performing
the control commands in all conditions (higher user participation than automation).
Level 3: Adaptive automation: The automation system learns the user’s preferences for performing a
control command across different conditions by initially offering inquisitive automation and
reducing user participation over time (higher automation than user participation).
Level 4: Full automation: The automation system performs the control commands in all conditions
(maximum automation and zero user participation).
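For reference, the four-level taxonomy above can be written as a small enumeration. This is a minimal sketch; the identifier names are ours, not from the dissertation.

```python
from enum import Enum

class AutomationLevel(Enum):
    """The four automation levels of the taxonomy (illustrative names)."""
    NO_AUTOMATION = 1   # fully manual: zero automation, maximum participation
    INQUISITIVE = 2     # ask for user approval before every control command
    ADAPTIVE = 3        # learn per-condition preferences, reduce participation
    FULL = 4            # execute control commands in all conditions
```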
The main difference between the adaptive automation level and the other automation levels is that
in the fully manual, inquisitive and full automation levels, the policy for performing control
commands is identical across different conditions, whereas in the adaptive automation level, the
policy for performing the control commands changes across different temporal and situational
conditions. In
other words, several factors might be involved in making an activity-related automation command
acceptable for the user in some conditions and unacceptable in others. To learn the stated
conditions, we developed an adaptive algorithm that learns user preferences for automation
commands under different conditions.
When the automation system is deployed in the built environment for the first time, preferred
automation levels are initially assigned for different contexts. Contexts are the compound tasks
(associated with different goal tasks) that are determined to build the task network for command
planning (section 9.1.1). The initial assignment of the level of automation for the specified
contexts is decided based on the findings from our previous study in [57].
Adaptive local learning: Our adaptive local learning algorithm learns the model h_θ(x), where θ
denotes the model parameters, to predict the target variable y_i, which is the user’s preferred
automation level, using the input variable x_i, which represents the current conditions
(x_i = [x_i1, x_i2, …, x_ik]). We assume three possible classes for the target variable: full
automation, inquisitive automation, and no automation; and seven input features: type of the
currently started/terminated activity, duration of the current activity, type of the next activity,
duration of the next activity, type of the appliance to be controlled, time of day, and day of week.
Each time the beginning or termination of an activity is recognized, the automation can decide on
the control commands using the model h(x) (i.e., prediction of a new data point). For example, given
that the current conditions are “It is Monday morning and the duration of the just-terminated
activity of working with the computer was 15 minutes,” the adaptive algorithm converts the conditions
into a feature vector (i.e., x_i), and the model receives it as input features and predicts the
target variable, such as “the user’s approval is required before turning off the computer” (i.e.,
the output of the model is inquisitive automation) or “the user does not want the computer to be
turned off” (i.e., the output of the model is no automation).
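One plausible encoding of the seven condition features into a numeric feature vector x_i is sketched below. The category lists and their orderings are assumptions for the sketch, not the dissertation's actual encoding.

```python
# Illustrative encoding of the seven condition features into a numeric
# vector x_i for the preference model h(x). Category orderings are assumed.
ACTIVITIES = ["working_with_computer", "washing_dishes", "watching_tv"]
APPLIANCES = ["computer", "monitor", "lamp"]

def encode_conditions(cond):
    """Map a dict of current conditions to the seven-feature vector."""
    return [
        ACTIVITIES.index(cond["current_activity"]),  # started/terminated activity
        cond["current_duration_min"],                # duration of current activity
        ACTIVITIES.index(cond["next_activity"]),     # type of next activity
        cond["next_duration_min"],                   # duration of next activity
        APPLIANCES.index(cond["appliance"]),         # appliance to be controlled
        cond["hour_of_day"],                         # time of day
        cond["day_of_week"],                         # 0 = Monday
    ]

# The "Monday morning, 15-minute computer session" example from the text:
x = encode_conditions({
    "current_activity": "working_with_computer", "current_duration_min": 15,
    "next_activity": "washing_dishes", "next_duration_min": 20,
    "appliance": "computer", "hour_of_day": 9, "day_of_week": 0,
})
```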
Using online machine learning approaches (i.e., stochastic gradient descent or stochastic gradient
boosting as mini-batch methods [112,113]), our algorithm incrementally learns the model h(x) in
consecutive rounds (i.e., at the beginning of each day) by minimizing the loss function (i.e., a
measure of how far the predicted outputs are from the actual values). When adaptive automation
is assigned to a given context for the first time, prior to executing a control command in that
context, the automation system asks for user preference (i.e., approval or denial) for executing the
command and acts accordingly (as it does in inquisitive automation). User’s responses to
automation requests along with their associated conditions (we call them observed user data) are
recorded to be used during the model updating process.
In stochastic gradient descent, h(x) is a logistic regression model or a linear Support Vector
Machine (SVM) classifier, with model parameters θ = {w, b}, where w denotes the weights and b
denotes the intercept. The algorithm initializes the model parameters to zero (h_0(x) = 0).
Starting from the next day, at the beginning of each round (i.e., day), the adaptive algorithm uses
the data observed during the previous day (a mini-batch of input and output pairs
{(x_i, y_i); i = n, …, m}) as the training set to calculate the model error and accordingly update
the model parameters for subsequent rounds as follows:

w_d ← w_(d−1) − η ∇_w [ (1 / (m − n + 1)) Σ_(i=n..m) L(h(x_i), y_i) ]    Equation 9-1

b_d ← b_(d−1) − η ∇_b [ (1 / (m − n + 1)) Σ_(i=n..m) L(h(x_i), y_i) ]    Equation 9-2

where η is the learning rate (taking a value between 0 and 1), m − n + 1 is the mini-batch size, and
L is a loss function that measures model error/fit. In the above equations, use of the “log” loss
function yields logistic regression, whereas use of the “hinge” loss function (soft margin) yields
an SVM classifier.
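The mini-batch update of Equations 9-1 and 9-2 can be sketched as follows. For brevity this sketch uses the hinge loss with binary labels {−1, +1}, a simplification of the three-class preference variable; the feature values and learning rate are illustrative.

```python
# Sketch of one mini-batch SGD round (Equations 9-1 and 9-2) for a linear
# classifier with hinge loss (soft-margin SVM), binary-label simplification.
def sgd_update(w, b, batch, eta=0.1):
    """Update (w, b) once on a mini-batch of (x, y) pairs, y in {-1, +1}."""
    m = len(batch)
    grad_w = [0.0] * len(w)
    grad_b = 0.0
    for x, y in batch:
        margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        if margin < 1:  # hinge loss is active: subgradient is -y * x
            for k in range(len(w)):
                grad_w[k] -= y * x[k]
            grad_b -= y
    # Averaged subgradient over the mini-batch, as in Equations 9-1/9-2
    w = [wi - eta * g / m for wi, g in zip(w, grad_w)]
    b = b - eta * grad_b / m
    return w, b

# One day's observed user data: (feature vector, preference label)
day_batch = [([1.0, 0.0], 1), ([0.0, 1.0], -1)]
w, b = sgd_update([0.0, 0.0], 0.0, day_batch)
```

A multiclass variant (as in the dissertation's three-class setting) would keep one weight vector per class, but the per-round structure is the same.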
In stochastic gradient boosting, h(x) consists of K coupled classification tree expansions, where K
is the number of classes (here, 3) and each tree has model parameters θ = {R_j, γ_j}, j = 1, …, J,
where R_j denotes the disjoint regions into which the feature space is partitioned and γ_j is a
constant assigned to each region. The algorithm initializes h(x) to zero:
h_0^1(x) = … = h_0^K(x) = 0. Starting from the second day, in each round d (i.e., at the beginning
of each day), the algorithm constructs K trees. Each tree T_d^k is fit to its respective negative
gradient vector:

g_d^k(x_i) = − ∂L(y_i, h_(d−1)^k(x_i)) / ∂h_(d−1)^k(x_i)    Equation 9-3

where the loss function in Equation 9-3 is the “multinomial deviance”. The constructed trees are
used to update h(x):

h_d^k(x) ← h_(d−1)^k(x) + η T_d^k    Equation 9-4

where η is the learning rate (taking a value between 0 and 1).
When the model error reduces to an acceptable value (e.g., a prediction accuracy of more than
85%), the automation system starts to execute the control commands based on the learnt model
rather than asking for user approval. From this point, in addition to the user’s responses to
automation requests (which happen when the output from the model is inquisitive), the automation
functions that are overridden by the user are also recorded to be used during the updating rounds.
Learning the Changes in Preferences in Time
As the user experiences the automation, his/her preferences might change over time (e.g., due to
increased/decreased trust in or satisfaction with the automation) [16,29–31]. In order to support
persistent and long-term autonomy, we developed an iterative global learning algorithm, in which
the automation updates the assigned automation levels in different contexts according to the changes
in user preferences. Through this process, the activity knowledgebase and the task network for
control commands are updated when a change occurs over time. Using reinforcement learning, our
algorithm iteratively (1) selects a control policy (i.e., automation level) and performs the control
commands based on the selected policy, (2) measures the reward of current policy (based on two
criteria of user satisfaction, and achieved benefit of using automation), and (3) updates the policy
by taking an action that maximizes the reward in the next iteration (e.g., changing automation level
assignment in a given context and updating the task network for planning).
To better understand the function of our iterative global algorithm, let us assume a scenario in
which the automation level to control a monitor is initially set to fully automated (user’s initial
preference prior to experiencing the automation). Accordingly, automation turns the monitor to
standby mode whenever the user stops working with the computer (even for short intervals) and
brings it back on when he/she starts to work with the computer again. Assuming that the user is
not satisfied with the performance of the automation system, he/she overrides several functions
executed by the automation (i.e., the monitor is turned on by the user just after it is turned to
standby mode by the automation). Due to several overridden functions, a low user satisfaction is
estimated by our algorithm. Accordingly, the algorithm suggests a decrease in automation level
(from full to adaptive automation). If it is accepted by the user, in the next iteration the level of
automation for this context (i.e., controlling the monitor) is set to adaptive automation. The
adaptive automation controls the monitor based on the model, which is trained through local
learning. Based on this model, automation does not turn the monitor to standby mode in cases of
short intervals between the activities of working with a computer. As a result, the number of
overridden functions decreases and the calculated user satisfaction reward increases.
Iterative global learning: Our iterative global learning algorithm uses a slightly modified version
of the Q-learning algorithm, which is a model-free reinforcement learning technique [142]. This
technique works by learning an action-value function Q(s_i, a_j), which gives the value of action
a_j given that the current state is s_i. In our case, states are the assigned automation levels,
which are the control policies, and actions are the changes between the automation levels (e.g.,
changing from no automation to inquisitive automation). The diagram in Figure 9-2 shows all possible
states and actions we assumed.
Figure 9-2. All possible states and actions
Along this line, the algorithm learns the Q matrix, whose rows are the current states and whose
columns are the actions (N = No automation, I = Inquisitive automation, A = Adaptive automation,
F = Full automation).
Each element in this matrix, Q(s_i, a_j), represents the value associated with changing the assigned
automation level from s_i to s_j via action a_j. As shown in the matrix, we assign values only to
the possible actions in each state. For example, for the state of full automation, the possible
actions are changing the automation level to adaptive automation or staying at the full automation
level.
Our algorithm learns Q(s_i, a_j) using the following equation:

Q(s_i, a_j) = r_action(s_i, a_j) + γ · Max[ Q(s_j, all possible actions) ]    Equation 9-5

The parameter γ takes a value between 0 and 1. If γ is closer to 1, the algorithm gives higher
weight to rewards from future actions than when it is closer to 0. We estimate the reward of action
a_j, given that we are in state s_i (i.e., r_action(s_i, a_j)), with regard to the policy reward
value on a daily basis.
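The one-step value update of Equation 9-5 can be sketched as follows. The state/action structure follows Figure 9-2 (each action is identified with its destination automation level); the discount factor and the reward value are illustrative assumptions.

```python
# Sketch of the Q-value update (Equation 9-5). States are automation levels
# (F, A, I, N); actions are identified with their destination states.
GAMMA = 0.8  # discount factor (illustrative)
POSSIBLE = {            # possible actions (destination states) per state
    "F": ["F", "A"],
    "A": ["F", "A", "I"],
    "I": ["A", "I", "N"],
    "N": ["I", "N"],
}

def q_update(Q, s, a, r_action):
    """Q(s, a) = r_action(s, a) + GAMMA * max Q over valid actions in s_j."""
    future = max(Q[(a, a2)] for a2 in POSSIBLE[a])
    Q[(s, a)] = r_action + GAMMA * future
    return Q[(s, a)]

Q = {(s, a): 0.0 for s in POSSIBLE for a in POSSIBLE[s]}
q_update(Q, "F", "A", r_action=10.0)  # reward for moving full -> adaptive
```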
The reward of a given control policy (i.e., r_policy(s_i)) is calculated by finding the weighted sum
of two measured reward values: (1) user satisfaction; and (2) the achieved benefit of using
automation (i.e., energy saving):

r_policy(s_i) = w_US · r_US + w_AB · r_AB    Equation 9-6

where r_US is the measured reward from user satisfaction, r_AB is the measured reward from the
achieved benefit, and w_US and w_AB are predetermined weights associated with r_US and r_AB,
respectively.
To measure the reward from user satisfaction (r_US), our algorithm quantifies the user’s
dissatisfaction by penalizing the times the user is asked to communicate with the automation (i.e.,
user participation) and, more importantly, the times the functions executed by the automation are
overridden by the user, on a daily basis. Accordingly, we give α1 positive points (e.g., 3 points)
to correctly executed functions with no user participation (F_Correct-NoUP), α2 positive points
(e.g., 2 points) to correctly executed functions via user participation (F_Correct-UP), and β
penalty points (e.g., 3 points, entering the sum with a negative sign) to overridden functions
(F_Overridden). Accordingly, r_US in each day is calculated as follows:

r_US = [ (α1 × (Number of F_Correct-NoUP) + α2 × (Number of F_Correct-UP) − β × (Number of F_Overridden))
/ (Number of F_Correct-NoUP + Number of F_Correct-UP + Number of F_Overridden) ]_Normalized × 100    Equation 9-7
To measure the reward from the achieved benefit of using automation (i.e., r_AB) over a given
predefined time (i.e., a day), our algorithm estimates the percentage of achieved energy saving
(achieved_ES) relative to the maximum potential energy saving that could have been achieved by
using full automation (potential_ES). Accordingly, r_AB in each day is calculated as follows:

r_AB = (achieved_ES / potential_ES) × 100    Equation 9-8
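The daily reward computation of Equations 9-6 through 9-8 can be sketched as below. The point values and weights follow the examples in the text; the normalization in Equation 9-7 (dividing by the maximum attainable score) is an assumption, since the equation leaves the normalization unspecified.

```python
# Sketch of the daily policy-reward calculation (Equations 9-6 to 9-8).
ALPHA1, ALPHA2, BETA = 3, 2, 3   # points per function outcome (from the text)
W_US, W_AB = 0.5, 0.5            # reward weights; must sum to 1 (assumed equal)

def r_user_satisfaction(n_correct_no_up, n_correct_up, n_overridden):
    """Equation 9-7; normalized to the maximum attainable score (assumption)."""
    total = n_correct_no_up + n_correct_up + n_overridden
    score = (ALPHA1 * n_correct_no_up + ALPHA2 * n_correct_up
             - BETA * n_overridden)
    return score / (ALPHA1 * total) * 100

def r_achieved_benefit(achieved_es, potential_es):
    """Equation 9-8: achieved energy saving as a percentage of the potential."""
    return achieved_es / potential_es * 100

def r_policy(n_no_up, n_up, n_over, achieved_es, potential_es):
    """Equation 9-6: weighted sum of the two reward components."""
    return (W_US * r_user_satisfaction(n_no_up, n_up, n_over)
            + W_AB * r_achieved_benefit(achieved_es, potential_es))
```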
To map policy rewards to action rewards, our algorithm calculates the average daily policy reward
over a predefined period of time, which we call the reward calculation period (T_R):

R_policy = [ r_policy(s_F) = mean(r_policy(s_F)_d);
             r_policy(s_A) = mean(r_policy(s_A)_d);
             r_policy(s_I) = mean(r_policy(s_I)_d);
             r_policy(s_N) = mean(r_policy(s_N)_d) ]    (d = 1, …, T_R)

(N = No automation, I = Inquisitive automation, A = Adaptive automation, F = Full automation)
For each action a_j in state s_i, the difference (which could be negative or positive) between the
average policy reward values of s_i (i.e., the current state) and s_j (i.e., the next state) is
taken as the associated action reward value (Equation 9-9):

r_action(s_i, a_j) = r_policy(s_j) − r_policy(s_i)    Equation 9-9

Accordingly, over time, as policy reward values are estimated, our algorithm calculates the action
reward values in the R_action matrix, whose rows are the current states and whose columns are the
actions (N = No automation, I = Inquisitive automation, A = Adaptive automation, F = Full
automation), where r_action(s_i, a_j) represents the action reward value associated with the action
of changing the assigned automation level from s_i to s_j.
The following procedure summarizes how our algorithm calculates the Q matrix (i.e., the exploration
procedure):

Step 1. Set w_US, w_AB, α1, α2, β, γ, T_R, Ε, ε:
• w_US and w_AB represent the weights the algorithm considers for the rewards from user satisfaction
and achieved benefit, and must be chosen such that w_US + w_AB = 1
• α1, α2 and β represent the relative weights the algorithm considers for F_Correct-NoUP,
F_Correct-UP and F_Overridden, respectively, and must be set to integer values
• γ represents the weight the algorithm considers for the reward from future actions and must be
set to a value between 0 and 1
• T_R is the number of days during which the level of automation is fixed for calculating the
average daily reward
• Ε is the lower limit for the percentage change of the learnt Q values. The algorithm repeats the
inner loop as long as the calculated percentage change exceeds Ε
• ε is the percentage change of the learnt Q values calculated in each round of the inner loop.
ε is initially set to ∞.
Step 2. Initialize the matrix Q (rows: current state; columns: action/next state; “−” marks
impossible actions):

        F  A  I  N
  F  [  0  0  −  −
  A     0  0  0  −
  I     −  0  0  0
  N     −  −  0  0 ]

(N = No automation, I = Inquisitive automation, A = Adaptive automation, F = Full automation)
Step 3. Initialize the policy rewards in the R_policy matrix and the action rewards in the R_action
matrix:

  R_policy:  F: ∞,  A: ∞,  I: ∞,  N: ∞

  R_action (rows: state; columns: action; “−” marks impossible actions):

        F  A  I  N
  F  [  ∞  ∞  −  −
  A     ∞  ∞  ∞  −
  I     −  ∞  ∞  ∞
  N     −  −  ∞  ∞ ]

(N = No automation, I = Inquisitive automation, A = Adaptive automation, F = Full automation)
Step 4. For each training session:
1. Set the initial automation level (s_0) equal to full automation, adaptive automation, inquisitive
automation or no automation
2. Find the set of valid actions for the initial state s_0, which are the actions with positive
action reward values in the R_action matrix for s_0:
   actions_valid(s_0) = { a_j | a_j ∈ possible actions for s_0 }
3. Set the initial state as the current state (s_i) and its set of valid actions as the set of valid
actions for the current state:
   s_i ← s_0
   actions_valid(s_i) ← actions_valid(s_0)
4. Do While ε > Ε:
   i. Calculate the policy rewards for each day during T_R (Equations 9-6, 9-7 and 9-8):
      { r_policy(s_i)_1, …, r_policy(s_i)_T_R }
   ii. Update the R_policy matrix by replacing the policy reward value of the current state in
       R_policy with the mean of the daily policy rewards
   iii. Update the R_action matrix by mapping the policy reward values from R_policy to action
        reward values in R_action (Equation 9-9)
   iv. Randomly select one among all possible actions for the current state and, using this action,
       consider going to the next state (s_j)
   v. Find the set of valid actions for the next state s_j:
      actions_valid(s_j) = { a_k | a_k ∈ possible actions for s_j }
   vi. Calculate Q_new(s_i, a_j) considering the immediate reward (r_action(s_i, a_j)) and the
       future reward (the maximum reward that could be achieved by performing an action in
       actions_valid(s_j)) (Equation 9-5)
   vii. Calculate ε by finding the percentage change in Q(s_i, a_j):
        ε = |Q_new(s_i, a_j) − Q(s_i, a_j)| / |Q(s_i, a_j)| × 100
   viii. Update the Q matrix by replacing Q(s_i, a_j) with Q_new(s_i, a_j)
   ix. Set the next state as the current state and its set of valid actions as the set of valid
       actions for the current state:
       s_i ← s_j
       actions_valid(s_i) ← actions_valid(s_j)
5. End Do
Step 5. End for
In each training session (i.e., each round of the outer loop in the presented exploration procedure),
our algorithm explores the user reaction (represented by the R_action matrix) until it reaches the
stopping point (i.e., ε ≤ Ε). After each training session, the brain of the automation (represented
by the Q matrix) is enhanced. Accordingly, more training sessions result in a more enhanced brain
(i.e., a more optimized Q matrix).
When training ends, our algorithm uses the Q matrix (learnt through the exploration procedure) to
trace, from the initial state, the sequence of actions with the highest reward values recorded in
the Q matrix. The following procedure summarizes how our algorithm uses the Q matrix to make a
decision on the automation level assignment (i.e., the exploitation procedure):

Step 1. Set the initial automation level (s_0) as the current state (s_i)
Step 2. From the Q matrix, find the action with the highest Q value (i.e., Q(s_i, a_j)) and, using
this action, consider going to the next state (s_j)
Step 3. While the selected action in Step 2 is not “remaining in the current state” (i.e., while
s_j ≠ s_i):
• Set the next state as the current state (s_i ← s_j)
• Repeat Step 2
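The exploitation steps can be sketched as a greedy trace over the learnt Q values. The state/action sets follow Figure 9-2; the Q values below are illustrative, not learnt from real data.

```python
# Sketch of the exploitation procedure: starting from the current automation
# level, greedily follow the highest-Q action until "stay put" is optimal.
POSSIBLE = {"F": ["F", "A"], "A": ["F", "A", "I"],
            "I": ["A", "I", "N"], "N": ["I", "N"]}

def exploit(Q, s0):
    """Trace the highest-Q action sequence until the best action is to stay."""
    s = s0
    while True:
        best = max(POSSIBLE[s], key=lambda a: Q[(s, a)])
        if best == s:          # "remaining in the current state" is optimal
            return s
        s = best

Q = {(s, a): 0.0 for s in POSSIBLE for a in POSSIBLE[s]}
Q[("F", "A")] = 12.0   # full -> adaptive was learnt to be rewarding...
Q[("A", "A")] = 8.0    # ...and staying in adaptive is best from there
final_level = exploit(Q, "F")
```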
9.2. Evaluation of Framework and Results
Data Used for Evaluation
To evaluate the proposed framework, we used both real and synthetic data. As the real data, we used
part of the sensing data (hereinafter referred to as sensing_dataset_real) that we collected in our
experimental study in [45] from an office testbed with five occupants for two weeks and an apartment
testbed with one occupant for one month (we used two weeks of the collected data for our validation
in [45]). The schematic layouts of the testbeds are depicted in Figure 9-3.
Figure 9-3. Schematic layouts of the testbeds
The testbeds were equipped with our sensing system, which included a set of plug meters to
measure the power and energy consumption of appliances, light sensors to capture ambient light
intensity, and motion sensors. During the experiment, the occupants were asked to record their
performed activities along with the activity start and finish times (hereinafter referred to as the
activity ground truth for the real dataset, activity_dataset_real) and also their preference for
controlling the related appliances (hereinafter referred to as the preference ground truth for the
real dataset, preference_dataset_real), using an online platform for data logging.
In addition to the real data, we simulated synthetic time series for activities using a Long
Short-Term Memory network (usually called LSTM), which is a special kind of Recurrent Neural Network
(RNN) capable of learning long-term dependencies. To train our LSTM models, we used the activity
ground truth data in activity_dataset_real. We converted the activity ground truth data to time
series with a frequency of 1 minute (converted_activity_dataset_real). Figure 9-4 shows a sample of
this conversion. The time series in activity_dataset_real carried information on activities only
when they were started or terminated, whereas the time series in converted_activity_dataset_real
carried information on the activities that were being performed at each minute. Using the sequence
of activities in a time window of 10 minutes as input, our trained LSTM models (with an average
10-fold cross-validation accuracy of about 91% for the office testbed and 89% for the apartment
testbed) predicted the sequence of activities in the consecutive time window of 5 minutes (i.e.,
the output). Accordingly, starting from the last 10 records (for the last
10 minutes) in converted_activity_dataset_real, we extended the real data we had by
simulating the activities for each 5-minute time window of the simulated data, one at a time, until
we had two months of activity data in total (considering both real and simulated data) for each
testbed. Following the simulation, we converted the simulated activity time series back into the
ground truth format (hereinafter referred to as activity_dataset_real+synthetic), so that they
would carry information on activities only when they were started or terminated.
Figure 9-4. A sample time series in ground truth format and its equivalent time series for simulation
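As a minimal sketch of this procedure (the function and variable names are ours, and a stub predictor stands in for the trained LSTM), the conversion between the two formats and the window-by-window rollout could look like the following:

```python
from datetime import datetime, timedelta

def events_to_minutes(events, start, end):
    """Expand (activity, start, finish) ground-truth records into a per-minute label series."""
    series, t = [], start
    while t < end:
        label = "idle"
        for activity, s, f in events:
            if s <= t < f:
                label = activity
                break
        series.append((t, label))
        t += timedelta(minutes=1)
    return series

def minutes_to_events(series):
    """Collapse a per-minute label series back into start/finish records."""
    events, current, t0 = [], None, None
    for t, label in series:
        if label != current:
            if current not in (None, "idle"):
                events.append((current, t0, t))
            current, t0 = label, t
    if current not in (None, "idle"):
        events.append((current, t0, series[-1][0] + timedelta(minutes=1)))
    return events

def simulate(seed_minutes, predict_next_5, total_minutes):
    """Autoregressive rollout: feed the last 10 minutes in, append the
    predicted next 5 minutes, and repeat until total_minutes are produced."""
    labels = [lab for _, lab in seed_minutes]
    t = seed_minutes[-1][0]
    out = list(seed_minutes)
    while len(out) < total_minutes:
        window = labels[-10:]          # input window of 10 minutes
        for lab in predict_next_5(window):  # predicted 5-minute window
            t += timedelta(minutes=1)
            out.append((t, lab))
            labels.append(lab)
    return out
```

In the actual pipeline, `predict_next_5` would be the trained LSTM; here any callable mapping a 10-label window to 5 labels will do.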
In order to ensure there were no contradictions (e.g., the activity of watching television initiating when there
was no one in the apartment) in the simulated datasets, we manually went through the sequential
instances of the simulated data to make sure their flow made sense as part of the daily routines of the
occupants. Following that, we checked the consistency of the activity knowledgebase we
constructed using the ontologies presented in [45] against the replayed simulated time series (more
information on the consistency check of the activity knowledgebase can be found in [45]). The data we
previously acquired in our real-world experiment (i.e., activity_dataset_real; two weeks
for the office and four weeks for the apartment) together with the simulated data (i.e., six
weeks for the office and four weeks for the apartment) gave us two-month-long periods
of activity data for both testbeds (hereinafter referred to as activity_dataset_real+synthetic).
Although we had only two weeks of real data for the office testbed, since there was a smaller number of
activity types in the office testbed (compared to the number of activity types in the apartment
testbed), we could still achieve a high-performing simulation model for the office testbed. Table
9-1 shows more details on activity_dataset_real+synthetic.
Table 9-1. Details of the achieved activity datasets for two-month-long periods
In order to extend preference_dataset_real to include user preferences for the simulated
activities, we manually went through the participants' preferences (in
preference_dataset_real) for their performed activities (in activity_dataset_real), in order
to identify patterns of their preferences per condition and accordingly guess the preferences for the
synthetic activities in activity_dataset_real+synthetic. In addition, we interviewed the
participants regarding their automation preferences in the different conditions that we observed in
the synthetic part of activity_dataset_real+synthetic. In synthetic conditions where the participants'
responses for their preferences matched our guesses, we added the preferences to the extended
version of preference_dataset_real (hereinafter referred to as
preference_dataset_real+synthetic). In conditions where the participants' responses did not match
our guesses (i.e., about 10% of cases), we added the preferences to
preference_dataset_real+synthetic based on the participants' interviews.
To test the algorithms explained in Section 9.1 (i.e., dynamic command planning, adaptive local
learning, and iterative global learning), we carried out three evaluations using the explained
datasets (which are also summarized in Table 9-2), as follows.
Table 9-2. Datasets used for evaluating our algorithms
Evaluation of Dynamic Command Planning
To evaluate the performance of our command planning algorithm on streaming data, we played
back the time series in sensing_dataset_real at the same rate and in the same order in which it
was recorded. In our evaluation, the automation level in each context was assigned according to
the most likely preferred automation level that we found in [57] for that context (i.e., for
rescheduling an activity, inquisitive automation was the most likely preferred option; for managing
appliance standby power, adaptive automation was the most likely preferred option; and for turning
off unneeded left-on appliances and lights, it was full automation). In contexts where inquisitive
automation was assigned, we generated user responses according to the records in
preference_dataset_real. Since in this part of our evaluation we only aimed at evaluating the
performance of our command planning algorithm, in contexts where adaptive automation was
assigned, we assumed the predicted output of the adaptive automation model was inquisitive
automation regardless of the input. Evaluation of adaptive automation was carried out in part 2 of
our evaluation. According to the activity ground truth (i.e., sensing_dataset_real) and the
automation levels that we assigned for different contexts, we generated the ground truth for control
commands at planning points. For example, assuming that at a given time t the occupant leaves the
apartment without turning off the lights (based on the activity ground truth) and the automation
level assignment for controlling the lights is full automation, the output of the command planning
algorithm must be to turn off the lights. Accordingly, turning off the lights at time t must be recorded
as the ground truth for control commands.
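A minimal sketch of this mapping from an activity event and its assigned automation level to the expected command follows; the function name, the string labels, and the rule set are illustrative assumptions, not the thesis implementation:

```python
# Derive the control-command ground truth for one planning point,
# given the activity event, the current appliance state, and the
# automation level assigned to this context (all names hypothetical).
def plan_ground_truth(event, appliance_state, automation_level):
    """Return the command the planner is expected to output."""
    if event == "leave_home" and appliance_state == "lights_on":
        if automation_level == "full":
            return "turn_off_lights"           # execute without asking
        if automation_level == "inquisitive":
            return "ask_user:turn_off_lights"  # request approval first
        if automation_level == "adaptive":
            return "predict_preference"        # defer to the learnt model
    return "no_command"
```

Under full automation, the example from the text (leaving with the lights on) maps directly to `turn_off_lights`.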
For our evaluation, replaying the time series in sensing_dataset_real as streaming data, every
30 seconds the automation called the activity recognition algorithm to detect the occurring
activities in a 60-second window of time. In case an activity was terminated or started in that time
window, the automation called the command planning algorithm to plan the control commands
based on the terminated/started activity, which the command planning algorithm received from the
activity recognition algorithm as input. Table 9-3 shows samples of the output from the command
planning algorithm. Comparing the planning results for the entire dataset with the ground truth for
control commands, we realized that the errors that occurred were related to the activity recognition
algorithm, which had an average accuracy of 96.8% for detecting actions and 97.6% for detecting
activities, and not to the command planning algorithm.
Table 9-3. Sample output from planning algorithm
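The playback loop described above can be sketched as follows (a simplified illustration with hypothetical names; the recognizer and planner are passed in as callables, standing in for the actual activity recognition and command planning algorithms):

```python
def replay(sensor_stream, recognize, plan, step=30, window=60):
    """Replay recorded sensing data as a stream: every `step` seconds,
    run activity recognition over the trailing `window` seconds and,
    if an activity started or terminated, invoke the command planner."""
    commands = []
    last_t = max(t for t, _ in sensor_stream)
    for now in range(window, last_t + 1, step):
        # trailing 60-second window of sensing records
        recent = [(t, v) for t, v in sensor_stream if now - window < t <= now]
        # recognizer returns a list of (activity, "started"/"terminated")
        for activity, transition in recognize(recent):
            commands.append((now, plan(activity, transition)))
    return commands
```

Here `sensor_stream` is a list of `(timestamp_seconds, value)` tuples; in the real system the loop would be driven by wall-clock time rather than by iterating over recorded timestamps.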
Since we used activity_dataset_real+synthetic for our evaluation in Section 9.2.3, we ran our
command planning algorithm (disjoint from our algorithm for activity recognition) using the time
series of activities in activity_dataset_real+synthetic as well. Since activity_dataset_synthetic
included the activity time series and not the sensing data, instead of first calling the activity recognition
algorithm to detect activities, our command planning algorithm directly carried out planning on
the replayed activity data from activity_dataset_real+synthetic. As we did before, we assigned
preferences for automation levels according to the most likely preferred automation level that we
found in [57] for the different contexts. To test our algorithm for dynamically changing the task
network to incorporate variations that occurred in the automation preferences, we assumed changes
in the automation preferences of the occupants (i.e., moving from an assigned automation level to
another) that occurred at the beginning of randomly selected days. The changes that we assumed were to
increase the automation level (i.e., no automation to inquisitive automation, inquisitive automation
to adaptive automation, and adaptive automation to full automation) or to decrease the automation
level (i.e., full automation to adaptive automation, adaptive automation to inquisitive automation,
and inquisitive automation to no automation). We extended and modified our manually generated ground truth
data for control commands at planning points (i.e., at the beginning or at the end of the recognized
activities) to include the simulated data and the assumed changes in automation preferences. Since
the command planning was carried out using the performed-activities data and not sensing data
(which is identical to using an error-free activity recognition), the results from our planning
algorithm matched the ground truth data for control commands.
Evaluation of Adaptive Local Learning
To evaluate our adaptive algorithm, we used the activity data in activity_dataset_real+synthetic
(i.e., the combination of both real and synthetic activity data) and the preference data in
preference_dataset_real+synthetic (i.e., the combination of both real and synthetic preference
data). In this part, we assigned the adaptive automation level to all contexts. For each user, we
randomly took 7 days out of the two-month-long data in activity_dataset_real+synthetic as our test
set, and used the remaining days (i.e., 49 days) for training (we assumed the remaining days
occurred consecutively in a random order). Replaying the activity time series of the training set, we
ran our command planning algorithm. At planning points, since the adaptive automation level was
assigned to all contexts, the planning algorithm called the adaptive algorithm to get the user's preference for
the current conditions. Along this line, as explained in Section 9.1.2, the adaptive algorithm first
initialized the model. At the beginning of learning (i.e., before reaching a prediction accuracy of
85%), when the adaptive algorithm was called by the command planner, the adaptive algorithm asked
for the user's preference (i.e., approval or denial) for executing the command. We generated the users'
responses to these requests according to the records in preference_dataset_real+synthetic. In
case of receiving approval from the user, the adaptive algorithm sent the output of full automation
to the planning algorithm, and in case of receiving a denial, the output of no automation was sent
to the planning algorithm. Based on the received preferences from adaptive automation, the
planning algorithm carried out the planning at planning points.
Figure 9-5. Test accuracy of the models for each participant through the days of training, using
stochastic gradient descent (SGD) with log loss (logistic regression) and hinge loss (SVM classifier),
and stochastic gradient boosting.
In the background, the user's responses in each day were used by the adaptive algorithm to update the
model (at the beginning of the next day). Following the daily updates, our algorithm used the updated
model to make predictions for the test set. When the test accuracy reached 85% (a less
conservative approach is to use the training accuracy), rather than asking for user preferences, the
adaptive algorithm sent the outputs of the learnt model to the command planning algorithm at
planning points. In case the planned commands (which were planned using the received
preferences from the adaptive algorithm) did not match the ground truth of control commands,
the command planning algorithm notified the adaptive algorithm so that the adaptive algorithm
could use the overridden instances for the next round of updating.
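One planning-point interaction of this ask-until-accurate, then predict-and-collect-overrides scheme can be sketched as follows. This is a deliberate simplification: the thesis used SGD and gradient boosting classifiers, whereas here the learnt model is replaced by a per-context majority vote, and all names are illustrative.

```python
def adaptive_step(model, context, true_pref, test_accuracy, threshold=0.85):
    """One planning-point call to the adaptive algorithm.
    Below the accuracy threshold the user is asked directly (inquisitive
    behavior); above it, the learnt model answers and any mismatch is
    logged as an override for the next daily update.
    Returns (response, overridden)."""
    if test_accuracy < threshold:
        # still learning: ask the user and record the answer as training data
        model.setdefault(context, []).append(true_pref)
        return true_pref, False
    # learnt model answers: majority vote over observed preferences
    votes = model.get(context, [])
    predicted = max(set(votes), key=votes.count) if votes else "no_automation"
    overridden = predicted != true_pref
    if overridden:
        # feed the overridden instance into the next round of updating
        model.setdefault(context, []).append(true_pref)
    return predicted, overridden
```

The `model` dictionary plays the role of the incrementally updated classifier; swapping in `SGDClassifier.partial_fit` or a boosting model would follow the same control flow.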
Figure 9-5 shows how the test accuracy of the models for each participant changed during the days
of training, using stochastic gradient descent (logistic regression (log loss) and SVM classifier
(hinge loss)) and stochastic gradient boosting (coupled classification trees). As Figure 9-5
suggests, stochastic gradient boosting outperforms stochastic gradient descent for all participants.
Our results indicate that after a few days (i.e., 8 days for occupant A, 6 days for occupant B, 7 days
for occupant C, 8 days for occupant D, 10 days for occupant E, and 12 days for occupant 1), the
accuracy of predicting the user's preference reaches an acceptable value (i.e., above 85%) via the
gradient boosting method. In other words, the learning rate of the model after this point does not
change significantly. This point was then used as the time at which the automation actually
executes the automation commands based on the learnt model. As shown in Figure 9-5, the
ultimate test accuracies of the learnt models were 93.9% for occupant A, 94.6% for occupant B,
91.8% for occupant C, 92.9% for occupant D, 89.0% for occupant E, and 88.5% for occupant 1.
The model for occupant 1 (the apartment testbed) has the lowest ultimate test accuracy. This
observation was expected, as there is more uncertainty and variation in the activities performed in
an apartment. In other words, the higher the level of variation in the activities and preferences of a
user, the longer it takes to learn a model for preference prediction.
Evaluation of Iterative Global Learning
In this part of our evaluation, we used activity_dataset_real and preference_dataset_real.
As explained in Section 9.1.3, our iterative global algorithm first set w_US = 0.7, w_AB = 0.3,
α1 = 3, α2 = 2, β = −3, γ = 0.8, T_R = 14, and Ε = 0.0, and initialized the matrices Q, R_policy,
and R_action (i.e., Steps 1, 2, and 3 in the procedure presented in Section 9.1.3). As the values we set for
w_US and w_AB suggest, we put more emphasis on user satisfaction than on energy saving. We
repeated the outer loop (with different initial states (s_0)) until all elements in the matrix Q were
learnt by our algorithm and had converged. The model that we used for adaptive automation was built
using the preference data (in preference_dataset_synthetic) associated with the other weeks
(i.e., starting from week 3). During the training sessions, we used the user preference data in
preference_dataset_real as user responses to inquisitive automation. Also, to identify
overridden functions in adaptive and full automation, we used the preference data in
preference_dataset_real. To do so, at planning points where the planned commands were
against the user preference in preference_dataset_real, we assumed the executed commands
were overridden by the user. For example, in a given planning point, although the user's preference is no
automation for a certain context, the adaptive automation wrongly predicts the user's preference as
full automation; hence, the command planned by the planning algorithm is to turn off the related
appliance, which in turn results in an override by the user.
In each training session (with a given initial state s_0), our iterative global learning algorithm
determined the valid actions and repeated the inner loop of learning the Q matrix until the stopping
point was reached (i.e., ε = Ε = 0.0). In each iteration of the inner loop with the current state
s_i, our algorithm calculated the daily policy rewards (r_policy) for each day during T_R (i.e., the
days in activity_dataset_real, {day_0, day_1, ..., day_{T_R−1}}). In the next iteration of the inner
loop (i.e., going to the next state s_j), our algorithm used the data in activity_dataset_real again, by
going back to day_0 and calculating the daily policy rewards for the new state. Figure 9-6 shows an
example variation of the daily policy reward for different states during T_R for one of the
participants. Based on Figure 9-6, the reward of the achieved benefit in the policy reward calculation
is highest in full automation, as expected. On the other hand, the user satisfaction reward is highest in
adaptive automation. Considering the sum of the rewards from achieved benefit and user
satisfaction (i.e., the total policy reward), adaptive automation has the highest and inquisitive
automation has the second highest reward values. Full automation and no automation come in third
and last place, respectively.
In each round of the inner loop, our algorithm mapped the average policy reward to the action reward
(r_action) and accordingly updated the Q matrix. When all elements in the matrix Q were learnt and
had converged, our algorithm normalized the achieved Q matrix for each participant. Figure 9-7 shows
the normalized Q matrices. The Q matrices in Figure 9-7 can be used by the automation to trace the
sequence of the actions with the highest reward values from an initial state. For example, let us
assume the initial state (assigned automation level) is no automation for occupant B. As the last row
of the matrix Q suggests, there are two valid actions for this state: (1) going to the inquisitive
automation level with a Q value of 100, and (2) staying in the no automation level with a Q value of 77.71.
Since our algorithm selects the action with the highest Q value among the valid actions, it selects
the action of going to inquisitive automation. If the user approves it, the automation changes the
assigned automation level to inquisitive. The valid actions for inquisitive automation (third row of
the Q matrix) are: (1) going to the adaptive automation level with a Q value of 23.98, (2) staying in
inquisitive automation with a Q value of 22.29, and (3) going to the no automation level with a Q value
of 0.00. Our algorithm selects going to adaptive automation, as it has the highest Q value.
Accordingly, if the user approves it, the automation changes the assigned automation level to adaptive.
There are three valid actions for the adaptive automation level (second row of the Q matrix): (1)
going to the full automation level with a Q value of 9.89, (2) staying in the adaptive automation level with
a Q value of 15.54, and (3) going to the inquisitive automation level with a Q value of 13.86. Based on
the Q values, the best action selected by our algorithm is to stay in adaptive automation.
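This greedy trace over a learnt Q matrix can be sketched directly. The three occupant-B rows below use the Q values quoted in the walkthrough above; the full-automation row and the use of -1 as an invalid-transition sentinel are illustrative assumptions, since those values are not stated in the text.

```python
# States in the order full, adaptive, inquisitive, no automation.
STATES = ["F", "A", "I", "N"]

# Q[i][j] = learnt value of moving from state i to state j; -1 marks an
# invalid transition (levels move at most one step at a time).
Q = [
    [ 0.0,   0.0,  -1.0,  -1.0 ],  # from F: stay or -> A (row illustrative)
    [ 9.89, 15.54, 13.86, -1.0 ],  # from A: -> F, stay, -> I (occupant B)
    [-1.0,  23.98, 22.29,  0.0 ],  # from I: -> A, stay, -> N (occupant B)
    [-1.0,  -1.0, 100.0,  77.71],  # from N: -> I, stay       (occupant B)
]

def greedy_trace(start, steps=3):
    """Follow the highest-Q valid action from `start` for `steps` moves."""
    path, state = [start], STATES.index(start)
    for _ in range(steps):
        state = max(range(len(STATES)), key=lambda j: Q[state][j])
        path.append(STATES[state])
    return path
```

Starting from no automation, the trace reproduces the sequence described above: N goes to I, I goes to A, and A is absorbing under these values.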
Figure 9-6. Example variation of daily policy reward for different states (Occupant B)
Figure 9-7. Normalized converged Q matrices for each participant after 4 training sessions
(N = No automation, I = Inquisitive automation, A = Adaptive automation, F = Full automation)
9.3. Discussion
Our results for testing the dynamic command planning showed that the errors that occurred were
related to the activity recognition algorithm, so in order to improve the performance of the
command planning, we need to improve the activity recognition. Our algorithm can successfully
consider the user's automation level preferences as part of the planning and update its task network
based on changes in the automation level assignments. This capability is particularly useful for the
iterative global learning algorithm, where, over time, assigned automation levels might change per
the user's approval.
Figure 9-8 shows the average daily energy savings that could be achieved via using full
automation, adaptive automation, and inquisitive automation (the potential energy saving in no
automation is zero). The energy savings were estimated based on the energy consumption data we
acquired during the first week of our experiment in [45]. Since in the full automation level the
automation turns off the appliances (regardless of user preferences) whenever a wasted energy
consumption is detected, the average daily energy saving in full automation is equal to the average
daily waste consumption we observed for each occupant in [45]. To estimate the energy savings
that could be achieved via adaptive automation, we used the days from the beginning of week 2
through the end of week 8 in preference_dataset_real+synthetic to train our adaptive automation
models. Next, the trained models were used by the adaptive algorithm to predict user preferences
during the first week of the data (for which we had energy consumption data obtained in our
experiment in [45]). The predicted preferences were used by the command planning algorithm to plan
the control commands. It should be pointed out here that the prediction accuracies of the trained
models were above 90% for all participants (we also observed this in our previous analysis
depicted in Figure 9-5). For the potential energy saving estimation, in cases where the models made
incorrect predictions, we assumed the associated control commands (which were incorrectly
planned due to the error in preference prediction) were not overridden by the users. Accordingly,
based on these commands, we estimated the average daily energy saving that could be achieved
for each participant via adaptive automation. To estimate the energy savings that could be achieved
via inquisitive automation, we used the first week of the preference data in
preference_dataset_real as user responses to inquisitive automation. Based on these responses,
the command planning algorithm planned the control commands. Based on these commands, we
estimated the average daily energy saving that could be achieved for each participant via
inquisitive automation.
Figure 9-8. The average daily energy consumption and the average daily energy savings that could
be achieved for each participant via adaptive automation
As Figure 9-8 shows, the average daily energy savings for full automation are about 24%, 72%,
59%, 57%, 75%, and 29% of the total daily energy consumption for occupants A, B, C, D, E, and
1, respectively. About 10%, 45%, 10%, 5%, 11%, and 20% of the total daily energy consumption is
saved in adaptive automation, for occupants A, B, C, D, E, and 1, respectively. Finally, the average
daily energy savings in inquisitive automation are about 11%, 49%, 9%, 6%, 10%, and 18% of the
total daily energy consumption, for occupants A, B, C, D, E, and 1, respectively. The significant
difference between the energy savings of full automation and those of adaptive or inquisitive
automation shows that user preferences are largely not in line with the energy saving goals. As we
checked our data, the reason behind the lower savings of adaptive automation and inquisitive
automation for occupants C, D, and E is that their computers were always on and they did not want
them to be switched to standby mode even when they left the office. Energy savings in adaptive
automation and inquisitive automation are quite close to each other, and the small differences are
due to the incorrect predictions of user preferences made by adaptive automation. As Figure 9-8
shows, for occupants A, B, and D, inquisitive automation has slightly higher energy saving compared
to adaptive automation. This means that the majority of the incorrect predictions made by the
trained models for these occupants were false negatives (i.e., predicting no automation while the
true preference is full automation). On the other hand, for occupants C, E, and 1, adaptive automation
has slightly higher energy saving compared to inquisitive automation. This implies that the
majority of the incorrect predictions made by the trained models for these occupants were false
positives (i.e., predicting full automation while the true preference is no automation).
The achieved benefits from the high energy savings in full automation do not compensate for its
negative effects on user satisfaction. This is why, in the matrices we learnt in our evaluations of the
iterative global learning (Figure 9-7), full automation is indicated as an unstable state that needs
to be changed to adaptive automation for all occupants. Indeed, these observations are also related
to how we selected the weights for the reward calculation (i.e., 0.7 for user satisfaction and 0.3 for
energy saving). Considering our problem configuration for the reward calculation (i.e., the selected
weights), adaptive automation and inquisitive automation outperform full automation, as they
result in achieving higher amounts of total reward. As stated before, in terms of energy saving,
inquisitive automation and adaptive automation have almost similar performance. Therefore, the
reason for adaptive automation being superior to inquisitive automation is the higher user
satisfaction that is achieved in adaptive automation (due to the lower level of user participation
required for adaptive automation compared to inquisitive automation). This is again a
configuration that could be adjusted (via α1 and α2) depending on how disturbing it is for the user to
permanently participate in the automation. As several studies also suggest (e.g., [16,29–31]), user
preferences play a significant role in the adoption of automation. Thus, by giving more importance to
user satisfaction, we reduce the chance of the user abandoning the automation, which in turn
potentially results in achieving more benefits (e.g., energy saving) in the long term.
9.4. Conclusion
In this chapter, we introduced an activity-driven and user-centered building automation approach
to improve the energy efficiency of appliances and lighting systems in buildings, considering
occupants' preferences and their dynamics. The proposed approach in this chapter was a
continuation of our earlier work, in which we developed an online activity recognition technique
that could be integrated with the automation module. Our proposed automation fully or partially
controls the service systems in buildings based on a set of dynamic rules that are generated with
insight about the user's preferences and activities. The algorithmic components of our proposed
automation include (1) dynamic command planning, (2) adaptive local learning, and (3) iterative
global learning. In order to evaluate these algorithms, we used a combination of real and synthetic
user activity and preference data from an office with five occupants and an apartment with one
occupant. Based on our results from the evaluation of adaptive local learning, after a certain number
of days (i.e., 8.5 days on average) the accuracy of predicting the participants' preferences reached an
acceptable value (i.e., above 85%). About 24% to 75%, 5% to 45%, and 6% to 49% of the total
daily energy consumption of the participants could be saved using full automation, adaptive
automation, and inquisitive automation, respectively. Our results for evaluating the iterative global
learning algorithm indicated that adaptive automation has the highest sum of rewards from
achieved benefit and user satisfaction, and inquisitive automation has the second highest reward
values. Full automation and no automation came in third and last place, respectively.
Chapter 10. Limitations and Future work
There are limitations associated with the presented thesis that require further investigation. The
first set of limitations is related to our activity recognition frameworks. The types of activities
included in our experiments were not inclusive of all activities. In our future work, by adding more
types of sensors (e.g., acoustic sensors and wearable sensors), we will investigate the performance
of our framework in recognizing other types of activities, such as working out or talking on the phone,
or group activities, such as delivering a presentation. Next, our experiments did not include multi-occupancy
residential testbeds. Therefore, we will extend our investigations to include multi-occupancy
residential testbeds with a particular focus on differentiating among occupants in our
future studies. Furthermore, the sequential order of performed actions (e.g., first sitting on the chair
and then turning on the monitor vs. first turning on the monitor and then sitting on the chair) and
also the inter-dependencies of activities (e.g., the inter-dependency between the activity of preparing food
and the activity of eating food) were not considered for activity recognition in this thesis. In our future
implementations, we will investigate whether these factors would affect the recognition of
activities and how to model them in our approach. Another limitation of our approach is the
required effort for initializing the system (i.e., specifying actions and associated sensors and also
constructing a knowledgebase). To eliminate the stated initialization effort and support large-scale
deployment of our proposed approach, we will investigate the use of existing automated metadata
construction techniques to identify/map sensors to actions (e.g., [143,144]). There are existing
ontologies to describe smart buildings (i.e., in domotics and other pervasive computing domains).
The ontology implemented in our framework could be replaced by these ontologies in our future
studies.
The second set of limitations is related to our building automation framework. The proposed
framework, in this thesis, was evaluated in a simulation using a combination of real and synthetic
data. Although evaluating our framework in a simulation is a limitation as we could not explore
the effects from experiencing the actual execution of automation commands, the achieved results
illustrate the potential and feasibility of the presented framework. More importantly, the presented
study indicated the capability of our algorithms in learning the dynamics of user preferences under
the controlled conditions. In order to further investigate the applicability of the framework and
find answers to our unanswered questions, we will implement the proposed automation algorithms
127
in real testbeds over a longer period of time, as part of our future work. Below are some of the
remaining questions that need to be answered in our future work.
In our evaluation of the command planning algorithm, we tested how our algorithm updated its
task network based on the changes in automation level assignments. Another interesting aspect to
explore is the variation in activities/appliances. In our implementation of the algorithm, we realized
these types of changes require more adjustments in the activity knowledgebase, the task network,
and other planning components (e.g., methods). Therefore, we excluded them from this thesis.
Apart from the fact that a procedure for the user to notify the automation regarding these changes
is essential, there is a need for an automated approach to modify the activity knowledgebase and
planning components based on the information received from the user on activities and appliances.
In this thesis, we did not consider group activities, such as having a meeting or giving a
presentation. Due to the fact that in group activities appliances are shared among users, conflicting
automation preferences might arise. To overcome this challenge, in our future work, we will
investigate the effect of conflicting preferences by considering an additional criterion for
preference interactions in the reward calculation of our iterative global learning algorithm. In this
study, we only considered overridden functions (in the adaptive local learning and iterative global
learning algorithms) that occur due to inappropriate automation level assignment. In our future
work, we will investigate other causes of overriding automation functions, such as slow
performance, errors in activity recognition due to an inadequate action detection model, and other
user-related changes (e.g., adding a new device or a change in schedule).
Chapter 11. Conclusions
The research work of this thesis aims at improving the energy efficiency of appliances and lighting
systems in buildings by providing occupants with activity-based electricity consumption feedback
and also an intelligent adaptive automation that controls the operation of appliances and lighting
systems based on user activities and preferences. In line with this objective, the first contribution
of this thesis is a framework to allocate personalized appliance-level disaggregated electricity
consumption to daily activities, using offline activity recognition. In our proposed framework, in
order to separate overlapping activities, we introduced an ontology-based approach, based on
which the input data (i.e., electricity usage) is separated into categories with regard to the context
information. Then the separated datasets are segmented into active and inactive segments. Next,
the active segments are mapped to activities. Finally, the associated electricity consumption of the
detected activities is estimated. In order to evaluate our framework, an experimental validation in
three single-occupancy apartment units was carried out. The experimental results showed a total
F-measure value of 0.97 for segmentation and an average accuracy of 93.41% for activity
recognition. The presented framework provides occupants with appliance-level activity-based
electricity consumption feedback, which in turn helps them understand how their activities
affect the building's energy consumption.
With the purpose of recognizing activities in real-time to provide activity-driven automation in
buildings, the second contribution of this thesis is an unsupervised framework for real-time activity
recognition and wasted electricity cost and energy consumption detection, using a combination of
inductive and deductive reasoning. Our proposed framework consists of three sub-algorithms:
action detection, activity recognition and waste estimation. As the real-time input, the action
detection algorithm receives the data from the sensing system to detect the occurred actions using
unsupervised machine learning algorithms to detect actions. Detected actions are then used by the
activity recognition algorithm to recognize the activities through semantic reasoning on our
constructed ontology. For a given appliance, based on the recognized activities and waste
estimation policies that are applicable, the waste estimation algorithm determines whether the
current consumption of the appliance is considered waste and accordingly estimates the
potential savings. To evaluate the performance of our framework, three experiments were carried
out over two weeks in a testbed office with five occupants and two single-occupancy
apartments, in which the performance of the action detection and activity recognition was
evaluated using the ground truth labels for actions and activities. Results showed an average accuracy
of 96.8% for action detection and 97.6% for activity recognition. In addition, the results from the
waste estimation showed that an average of 35.5% of the consumption of an appliance or lighting
system in the testbeds could be potentially reduced.
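The waste-estimation step can be illustrated with a toy sketch; the appliance names, the policy table, and the power values below are invented for the example and are not taken from the testbeds:

```python
# Illustrative sketch (not the dissertation's code) of waste estimation:
# given the current activity and a per-appliance policy, decide whether
# each appliance's draw counts as waste and tally the potential saving.

POLICIES = {  # appliance -> activities during which it is "needed"
    "desk_lamp": {"working", "reading"},
    "monitor": {"working"},
}

def wasted_power(snapshot, current_activity):
    """snapshot: {appliance: watts}. Return total watts deemed wasted,
    i.e., drawn by appliances not needed for the current activity."""
    waste = 0.0
    for appliance, watts in snapshot.items():
        needed = current_activity in POLICIES.get(appliance, set())
        if watts > 0 and not needed:
            waste += watts
    return waste

snapshot = {"desk_lamp": 12.0, "monitor": 35.0}
print(wasted_power(snapshot, "reading"))  # monitor not needed while reading
```

In the actual framework the applicable policies are derived through semantic reasoning on the ontology rather than from a static lookup table as here.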
The third contribution of this thesis is an understanding of how occupants' automation preferences
for lighting system and appliance control vary with subjective factors in different control contexts.
The contexts we focused on in our study include rescheduling an energy-consuming activity,
management of different appliance states with regard to occupant activities, and occupancy-based
control of lighting systems. For each context, we defined four levels of automation, including full
automation, inquisitive automation (asks for the user's approval before taking an action), adaptive
automation (learns user’s pattern and acts accordingly) and no automation. We carried out a survey
of 250 respondents using Amazon Mechanical Turk to determine how automation preferences
vary by personality and demographic-related characteristics, including big five personality traits
(i.e., extraversion, agreeableness, conscientiousness, neuroticism and openness to experience) and
age, gender, marital status, education level and income level in residential buildings. Our findings
indicated that automation preferences vary by context, such that for rescheduling an activity,
inquisitive automation is the most likely preferred option, whereas for managing appliance standby
power, adaptive automation is the option with the highest probability of being preferred, and for
turning off appliances and lights that are left on unnecessarily, the preference is full automation. Based on
our findings, among the demographic-related variables, the effects of education and income levels
on automation preference are marginally significant and significant, respectively. Among the
personality traits, the effects of agreeableness, neuroticism and openness to experience are significant. Finally,
our investigation shows that in all contexts no automation is the least preferred option.
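The per-context preference patterns summarized above can be tabulated in a few lines; the response counts below are fabricated for illustration and are not the survey data:

```python
# Minimal sketch of summarizing per-context automation-preference
# responses: count the chosen level per context and report the mode.
from collections import Counter

def most_preferred(responses):
    """responses: list of chosen automation levels for one context."""
    return Counter(responses).most_common(1)[0][0]

# Made-up response lists for two of the three contexts.
rescheduling = ["inquisitive"] * 5 + ["adaptive"] * 3 + ["full"] * 2
standby = ["adaptive"] * 6 + ["inquisitive"] * 2 + ["full"] * 2
print(most_preferred(rescheduling), most_preferred(standby))
```

The study itself went beyond such modal counts, fitting mixed logit models to test the significance of personality and demographic effects.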
The fourth and final contribution of this thesis is an activity-driven and user-centered building
automation framework to improve the energy efficiency of appliances and lighting systems in
buildings considering occupants’ preferences and their dynamics. Our proposed automation fully
or partially controls the service systems in buildings based on a set of dynamic rules that are
generated using insight into users' preferences and activities. The algorithmic components of
our proposed automation include (1) dynamic command planning, (2) adaptive local learning, and
(3) iterative global learning. In order to evaluate these algorithms, we used a combination of real
and synthetic user activity and preference data from an office with five occupants and an
apartment with one occupant. Based on our results from the evaluation of adaptive local learning,
after a certain number of days (8.5 days on average) the accuracy of predicting participants'
preferences reached an acceptable value (above 85%). About 24% to 75%, 5% to 45%, and
6% to 49% of the total daily energy consumption of the participants could be saved using full
automation, adaptive automation and inquisitive automation, respectively. Our results from
evaluating the iterative global learning algorithm indicated that adaptive automation has the highest
sum of rewards from achieved benefit and user satisfaction, and that inquisitive automation has the
second highest reward values. Full automation and no automation came in third and last,
respectively.
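The idea behind the iterative global learning comparison can be sketched as a tabular Q-learning loop (in the spirit of [142]) that chooses among automation levels and is rewarded by the sum of achieved benefit and user satisfaction. The reward values and hyperparameters below are invented for illustration, not the thesis's model:

```python
# Hedged sketch: single-state tabular Q-learning over automation levels.
# With one state and no successor term, the Q update reduces to a running
# average of the (invented) combined benefit-plus-satisfaction reward.
import random

LEVELS = ["none", "full", "inquisitive", "adaptive"]
REWARD = {"none": 0.1, "full": 0.6, "inquisitive": 0.7, "adaptive": 0.9}

def learn(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {lvl: 0.0 for lvl in LEVELS}
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the current best estimate
        lvl = (rng.choice(LEVELS) if rng.random() < epsilon
               else max(q, key=q.get))
        q[lvl] += alpha * (REWARD[lvl] - q[lvl])
    return q

q = learn()
print(max(q, key=q.get))  # converges to the highest-reward level: adaptive
```

Under these toy rewards the learned ordering matches the thesis's finding: adaptive automation accumulates the highest value, followed by inquisitive, full, and no automation.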
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grants No.
1351701, 1231001 and 0930868. I am thankful for the support of the NSF. Any opinions, findings,
and conclusions or recommendations expressed in this material are those of the author(s) and do
not necessarily reflect the views of the National Science Foundation. I am very thankful to my
Ph.D. advisors, Dr. Burcin Becerik-Gerber and Dr. Lucio Soibelman, for their continuous support
and guidance. I am also thankful to iLab members, especially Mike Castro, Ali Ghahramani and
Farrokh Jazizadeh, who partially supported or contributed to the research presented in this thesis.
I am very thankful to Dr. Chris Mattmann, from whom I learned several data science and machine
learning skills and whose course at USC was one of the best courses I took during my academic
studies. Finally, I am very thankful to my family for their support and kind help all these years.
Publications
Peer-Reviewed Journal Publications
1. S. Ahmadi-Karvigh, B. Becerik-Gerber, L. Soibelman, A framework for allocating personalized
appliance-level disaggregated electricity consumption to daily activities, Energy and Buildings
111 (2016) 337-350. (published) – Chapter 6
2. S. Ahmadi-Karvigh, A. Ghahramani, B. Becerik-Gerber, L. Soibelman, Real-time activity
recognition for energy efficiency in buildings, Applied Energy 211 (2018) 146-160. (published) –
Chapter 7
3. S. Ahmadi-Karvigh, A. Ghahramani, B. Becerik-Gerber, L. Soibelman, One size does not fit
all: Understanding user preferences for building automation systems, Energy and Buildings 145
(2017) 163-173. (published) – Chapter 8
4. S. Ahmadi-Karvigh, B. Becerik-Gerber, L. Soibelman, Intelligent adaptive automation: An
activity-driven and user-centered building automation. (expected submission 8/2018) – Chapter 9
5. S. Ahmadi-Karvigh, B. Becerik-Gerber, L. Soibelman, Ten questions concerning building
automation: What makes buildings intelligent?. (expected submission 8/2018)
6. F. Jazizadeh, S. Ahmadi-Karvigh, B. Becerik-Gerber, L. Soibelman, Spatiotemporal lighting
load disaggregation using light intensity signal, Energy and Buildings 69 (2014) 572-583.
(published)
7. A. Ghahramani, S. Ahmadi-Karvigh, B. Becerik-Gerber, HVAC system energy optimization
using an adaptive hybrid metaheuristic. Energy and Buildings 152 (2017) 149-161. (published)
8. A. Ghahramani, G Castro, S. Ahmadi-Karvigh, B. Becerik-Gerber, Towards unsupervised
learning of thermal comfort using infrared thermography, Applied Energy 211 (2018) 41-49.
(published)
References
[1] Annual Energy Outlook. US Energy Inf Adm 2015.
[2] Residential Energy Consumption Survey (RECS). US Energy Inf Adm 2009.
[3] Commercial buildings energy consumption survey (CBECS). US Energy Inf Adm 2012.
[4] Nguyen TA, Aiello M. Energy intelligent buildings based on user activity: A survey.
Energy Build 2013;56:244–57.
[5] Farinaccio L, Zmeureanu R. Using a pattern recognition approach to disaggregate the total
electricity consumption in a house into the major end-uses. Energy Build 1999;30:245–59.
[6] Froehlich J, Larson E, Gupta S, Cohn G, Reynolds M, Patel S. Disaggregated end-use
energy sensing for the smart grid. IEEE Pervasive Comput 2011;10:28–39.
[7] Jazizadeh F, Ahmadi-Karvigh S, Becerik-Gerber B, Soibelman L. Spatiotemporal lighting
load disaggregation using light intensity signal. Energy Build 2014;69:572–83.
[8] Page J, Robinson D, Morel N, Scartezzini J-L. A generalised stochastic model for the
simulation of occupant presence. Energy Build 2008;40:83–98.
[9] Shaikh PH, Nor NBM, Nallagownden P, Elamvazuthi I, Ibrahim T. A review on
optimized control systems for building energy and comfort management of smart
sustainable buildings. Renew Sustain Energy Rev 2014;34:409–29.
[10] Kim E, Helal S, Cook D. Human activity recognition and pattern discovery. IEEE
Pervasive Comput 2010;9:48–53.
[11] Bradshaw JM, Feltovich PJ, Jung H, Kulkarni S, Taysom W, Uszok A. Dimensions of
adjustable autonomy and mixed-initiative interaction. Agents Comput. Auton., Springer;
2004, p. 17–39.
[12] Olson WA, Sarter NB. “As long as I’m in control...”: pilot preferences for and experiences
with different approaches to automation management. Hum. Interact. with Complex Syst.
1998. Proceedings., Fourth Annu. Symp., IEEE; n.d., p. 63–72.
[13] Parasuraman R, Hancock PA. Adaptive control of mental workload 2001.
[14] Parasuraman R, Mouloua M, Hilburn B. Adaptive aiding and adaptive task allocation
enhance human-machine interaction. Autom Technol Hum Perform Curr Res Trends
1999:119–23.
[15] Kaber DB, Endsley MR. The effects of level of automation and adaptive automation on
human performance, situation awareness and workload in a dynamic control task. Theor
Issues Ergon Sci 2004;5:113–53.
[16] Truszkowski W, Rouff C, Bailin S, Riley M. Progressive autonomy: a method for
gradually introducing autonomy into space missions. Innov Syst Softw Eng 2005;1:89–99.
[17] John OP, Donahue EM, Kentle RL. The Big Five Inventory--Versions 4a and 54. vol. 37. 1991.
doi:10.1016/S0092-6566(03)00046-1.
[18] Pérez-Lombard L, Ortiz J, Pout C. A review on buildings energy consumption
information. Energy Build 2008;40:394–8.
[19] Arens E, Federspiel CC, Wang D, Huizenga C. How ambient intelligence will improve
habitability and energy efficiency in buildings. Ambient Intell., Springer; 2005, p. 63–80.
[20] Fechner J V. Human factors in appliance energy-consumption. Proc. IEEE Appl. Tech.
Conf. Pittsburgh, Pennsylvania, n.d.
[21] Parasuraman R, Sheridan TB, Wickens CD. A model for types and levels of human
interaction with automation. IEEE Trans Syst Man, Cybern A Syst Humans 2000;30:286–
97.
[22] Amayri M, Arora A, Ploix S, Bandhyopadyay S, Ngo Q-D, Badarla VR. Estimating
occupancy in heterogeneous sensor environment. Energy Build 2016;129:46–58.
[23] Richardson I, Thomson M, Infield D, Clifford C. Domestic electricity use: A high-
resolution energy demand model. Energy Build 2010;42:1878–87.
[24] Noor MHM, Salcic Z, Kevin I, Wang K. Enhancing ontological reasoning with
uncertainty handling for activity recognition. Knowledge-Based Syst 2016;114:47–60.
[25] Ahmadi-Karvigh S, Becerik-Gerber B, Soibelman L. A framework for allocating
personalized appliance-level disaggregated electricity consumption to daily activities
2015.
[26] Ghahramani A, Tang C, Becerik-Gerber B. An online learning approach for quantifying
personalized thermal comfort via adaptive stochastic modeling. Build Environ
2015;92:86–96.
[27] Vagia M, Transeth AA, Fjerdingen SA. A literature review on the levels of automation
during the years. What are the different taxonomies that have been proposed? Appl Ergon
2016;53:190–202.
[28] Szalma JL, Taylor GS. Individual differences in response to automation: the five factor
model of personality. J Exp Psychol Appl 2011;17:71.
[29] Rotter JB. Interpersonal trust, trustworthiness, and gullibility. Am Psychol 1980;35:1.
[30] Merritt SM, Ilgen DR. Not all trust is created equal: Dispositional and history-based trust
in human-automation interactions. Hum Factors J Hum Factors Ergon Soc 2008;50:194–
210.
[31] Brown SA, Venkatesh V. Model of adoption of technology in households: A baseline
model test and extension incorporating household life cycle. MIS Q 2005:399–426.
[32] Junco R, Merson D, Salter DW. The effect of gender, ethnicity, and income on college
students’ use of communication technologies. Cyberpsychology, Behav Soc Netw
2010;13:619–27.
[33] Zhou T, Lu Y. The effects of personality traits on user acceptance of mobile commerce.
Intl J Human–Computer Interact 2011;27:545–61.
[34] Cook DJ, Krishnan NC. Activity Learning: Discovering, Recognizing, and Predicting
Human Behavior from Sensor Data. John Wiley & Sons; 2015.
[35] Simpson J, Weiner ESC. Oxford English dictionary online. Oxford Clarendon Press
Retrieved March 1989;6:2008.
[36] Domingues P, Carreira P, Vieira R, Kastner W. Building automation systems: Concepts
and technology review. Comput Stand Interfaces 2016;45:1–12.
[37] Dorais G, Bonasso RP, Kortenkamp D, Pell B, Schreckenghost D. Adjustable autonomy
for human-centered autonomous systems. Work. notes Sixt. Int. Jt. Conf. Artif. Intell.
Work. Adjust. Auton. Syst., n.d., p. 16–35.
[38] Helal S, Mann W, El-Zabadani H, King J, Kaddoura Y, Jansen E. The gator tech smart
house: A programmable pervasive space. Computer (Long Beach Calif) 2005;38:50–60.
[39] Yoo S, Rho D, Cheon G, Choi J. A central repository for biosignal data. 2008 Int. Conf.
Inf. Technol. Appl. Biomed., IEEE; n.d., p. 275–7.
[40] Ghahramani A, Castro G, Karvigh SA, Becerik-Gerber B. Towards unsupervised learning
of thermal comfort using infrared thermography. Appl Energy 2018;211:41–9.
doi:10.1016/j.apenergy.2017.11.021.
[41] Brdiczka O, Crowley JL, Reignier P. Learning situation models in a smart home. IEEE
Trans Syst Man, Cybern Part B 2009;39:56–63.
[42] Anguita D, Ghio A, Oneto L, Parra X, Reyes-Ortiz JL. Human activity recognition on
smartphones using a multiclass hardware-friendly support vector machine. Int. Work.
Ambient Assist. Living, Springer; 2012, p. 216–23.
[43] Candanedo LM, Feldheim V, Deramaix D. A methodology based on Hidden Markov
Models for occupancy detection and a case study in a low energy residential building.
Energy Build 2017;148:327–41. doi:10.1016/j.enbuild.2017.05.031.
[44] Singh D, Merdivan E, Hanke S, Kropf J, Geist M, Holzinger A. Convolutional and
recurrent neural networks for activity recognition in smart environment. vol. 10344 LNAI.
2017. doi:10.1007/978-3-319-69775-8_12.
[45] Ahmadi-Karvigh S, Ghahramani A, Becerik-Gerber B, Soibelman L. Real-time activity
recognition for energy efficiency in buildings. Appl Energy 2018;211:146–60.
doi:10.1016/j.apenergy.2017.11.055.
[46] Nguyen TA, Raspitzu A, Aiello M. Ontology-based office activity recognition with
applications for energy savings. J Ambient Intell Humaniz Comput 2014;5:667–81.
[47] Harle RK, Hopper A. The potential for location-aware power management. Proc. 10th Int.
Conf. Ubiquitous Comput., ACM; n.d., p. 302–11.
[48] Zou H, Zhou Y, Jiang H, Chien SC, Xie L, Spanos CJ. WinLight: A WiFi-based
occupancy-driven lighting control system for smart building. Energy Build 2018;158:924–
38. doi:10.1016/j.enbuild.2017.09.001.
[49] Davidsson P, Boman M. Distributed monitoring and control of office buildings by
embedded agents. Inf Sci (Ny) 2005;171:293–307.
[50] Oldewurtel F, Parisio A, Jones CN, Gyalistras D, Gwerder M, Stauch V, et al. Use of
model predictive control and weather forecasts for energy efficient building climate
control. Energy Build 2012;45:15–27. doi:10.1016/j.enbuild.2011.09.022.
[51] Barbato A, Borsani L, Capone A, Melzi S. Home energy saving through a user profiling
system based on wireless sensors. Proc. first ACM Work. Embed. Sens. Syst. energy-
efficiency Build., ACM; n.d., p. 49–54.
[52] Schmidt M, Åhlund C. Smart buildings as Cyber-Physical Systems: Data-driven
predictive control strategies for energy efficiency. Renew Sustain Energy Rev
2018;90:742–56.
[53] Gopalratnam K, Cook DJ. Online sequential prediction via incremental parsing: The
active lezi algorithm. IEEE Intell Syst 2007;22:52–8.
[54] Zhang S, McClean S, Scotney B, Hong X, Nugent C, Mulvenna M. Decision support for
alzheimer’s patients in smart homes. Comput. Med. Syst. 2008. CBMS’08. 21st IEEE Int.
Symp., IEEE; n.d., p. 236–41.
[55] Georgievski I, Nguyen TA, Nizamic F, Setz B, Lazovik A, Aiello M. Planning meets
activity recognition: Service coordination for intelligent buildings. Pervasive Mob Comput
2017;38:110–39. doi:10.1016/j.pmcj.2017.02.008.
[56] Alam MR, Reaz MBI, Ali MAM. A review of smart homes—past, present, and future.
IEEE Trans Syst Man, Cybern Part C (Applications Rev 2012;42:1190–203.
[57] Ahmadi-Karvigh S, Ghahramani A, Becerik-Gerber B, Soibelman L. One size does not fit
all: Understanding user preferences for building automation systems. Energy Build
2017;145:163–73.
[58] Ghahramani A, Karvigh SA, Becerik-Gerber B. HVAC system energy optimization using
an adaptive hybrid metaheuristic. Energy Build 2017;152:149–61.
doi:10.1016/j.enbuild.2017.07.053.
[59] Kim J, Zhou Y, Schiavon S, Raftery P, Brager G. Personal comfort models: Predicting
individuals’ thermal preference using occupant heating and cooling behavior and machine
learning. Build Environ 2018;129:96–106. doi:10.1016/j.buildenv.2017.12.011.
[60] Park JY, Nagy Z. Comprehensive analysis of the relationship between thermal comfort
and building control research - A data-driven literature review. Renew Sustain Energy
Rev 2018;82:2664–79. doi:10.1016/j.rser.2017.09.102.
[61] Liu J, Zhang W, Chu X, Liu Y. Fuzzy logic controller for energy savings in a smart LED
lighting system considering lighting comfort and daylight. Energy Build 2016;127:95–
104. doi:10.1016/j.enbuild.2016.05.066.
[62] Kwak J, Varakantham P, Maheswaran R, Tambe M, Jazizadeh F, Kavulya G, et al.
SAVES: A sustainable multiagent application to conserve building energy considering
occupants. Proc. 11th Int. Conf. Auton. Agents Multiagent Syst. 1, International
Foundation for Autonomous Agents and Multiagent Systems; n.d., p. 21–8.
[63] Alan AT, Ramchurn SD, Rodden T, Costanza E, Fischer J, Jennings NR. Managing
energy tariffs with agents: a field study of a future smart energy system at home. Adjun.
Proc. 2015 ACM Int. Jt. Conf. Pervasive Ubiquitous Comput. Proc. 2015 ACM Int. Symp.
Wearable Comput., ACM; n.d., p. 1551–8.
[64] Zheng H, Wang H, Black N. Human activity detection in smart home environment with
self-adaptive neural networks. Networking, Sens. Control. 2008. ICNSC 2008. IEEE Int.
Conf., IEEE; n.d., p. 1505–10.
[65] Ma T, Kim Y-D, Ma Q, Tang M, Zhou W. Context-aware implementation based on CBR
for smart home. WiMob’2005), IEEE Int. Conf. Wirel. Mob. Comput. Netw. Commun.
2005., vol. 4, IEEE; n.d., p. 112–5.
[66] Rashidi P, Cook DJ. Keeping the intelligent environment resident in the loop. Intell.
Environ. 2008 IET 4th Int. Conf., IET; n.d., p. 1–9.
[67] Avci A, Bosch S, Marin-Perianu M, Marin-Perianu R, Havinga P. Activity recognition
using inertial sensing for healthcare, wellbeing and sports applications: A survey. Archit.
Comput. Syst. (ARCS), 2010 23rd Int. Conf., VDE; n.d., p. 1–10.
[68] Serna A, Pigot H, Rialle V. Modeling the progression of Alzheimer’s disease for cognitive
assistance in smart homes. User Model User-Adapt Interact 2007;17:415–38.
[69] Mubashir M, Shao L, Seed L. A survey on fall detection: Principles and approaches.
Neurocomputing 2013;100:144–52.
[70] Abreu JM, Pereira FC, Ferrão P. Using pattern recognition to identify habitual behavior in
residential electricity consumption. Energy Build 2012;49:479–87.
[71] Virote J, Neves-Silva R. Stochastic models for building energy prediction based on
occupant behavior assessment. Energy Build 2012;53:183–93.
[72] Chen C, Cook DJ, Crandall AS. The user side of sustainability: Modeling behavior and
energy usage in the home. Pervasive Mob Comput 2013;9:161–75.
[73] Thomas BL, Cook DJ. CARL: Activity-aware automation for energy efficiency. Proc.
2014 ACM Int. Jt. Conf. Pervasive Ubiquitous Comput. Adjun. Publ., ACM; 2014, p.
939–46.
[74] Lee S, Ryu G, Chon Y, Ha R, Cha H. Automatic standby power management using usage
profiling and prediction. IEEE Trans Human-Machine Syst 2013;43:535–46.
[75] Milenkovic M, Amft O. An opportunistic activity-sensing approach to save energy in
office buildings. Proc. fourth Int. Conf. Futur. energy Syst., ACM; n.d., p. 247–58.
[76] Conte G, De Marchi M, Nacci AA, Rana V, Sciuto D. BlueSentinel: a first approach using
iBeacon for an energy efficient occupancy detection system. BuildSys@ SenSys, n.d., p.
11–9.
[77] Khan A, Nicholson J, Mellor S, Jackson D, Ladha K, Ladha C, et al. Occupancy
monitoring using environmental & context sensors and a hierarchical analysis framework.
BuildSys@ SenSys, n.d., p. 90–9.
[78] Rodríguez ND, Cuéllar MP, Lilius J, Calvo-Flores MD. A survey on ontologies for human
behavior recognition. ACM Comput Surv 2014;46:43.
[79] Du Y, Chen F, Xu W, Li Y. Recognizing interaction activities using dynamic bayesian
network. Pattern Recognition, 2006. ICPR 2006. 18th Int. Conf., vol. 1, IEEE; 2006, p.
618–21.
[80] Logan B, Healey J, Philipose M, Tapia EM, Intille S. A long-term evaluation of sensing
modalities for activity recognition. Int. Conf. Ubiquitous Comput., Springer; 2007, p.
483–500.
[81] Bao L, Intille SS. Activity recognition from user-annotated acceleration data. Int. Conf.
Pervasive Comput., Springer; 2004, p. 1–17.
[82] Li F, Dustdar S. Incorporating Unsupervised Learning in Activity Recognition. Act.
Context Represent., 2011.
[83] Chen L, Nugent C. Ontology-based activity recognition in intelligent pervasive
environments. Int J Web Inf Syst 2009;5:410–30.
[84] Dounis AI, Caraiscos C. Advanced control systems engineering for energy and comfort
management in a building environment—A review. Renew Sustain Energy Rev
2009;13:1246–61.
[85] Ippolito MG, Sanseverino ER, Zizzo G. Impact of building automation control systems
and technical building management systems on the energy performance class of
residential buildings: An Italian case study. Energy Build 2014;69:33–40.
[86] Meerbeek B, te Kulve M, Gritti T, Aarts M, van Loenen E, Aarts E. Building automation
and perceived control: A field study on motorized exterior blinds in Dutch offices. Build
Environ 2014;79:66–77.
[87] Brush AJ, Lee B, Mahajan R, Agarwal S, Saroiu S, Dixon C. Home automation in the
wild: challenges and opportunities. Proc. SIGCHI Conf. Hum. Factors Comput. Syst.,
ACM; n.d., p. 2115–24.
[88] Davidoff S, Lee MK, Yiu C, Zimmerman J, Dey AK. Principles of smart home control.
Int. Conf. Ubiquitous Comput., Springer; n.d., p. 19–34.
[89] Hamill L. Controlling smart devices in the home. Inf Soc 2006;22:241–9.
[90] Tambe M, Scerri P, Pynadath D V. Adjustable autonomy for the real world. J Artif Intell
Res 2002;17:171–228.
[91] Penaloza CI, Mae Y, Cuellar FF, Kojima M, Arai T. Brain machine interface system
automation considering user preferences and error perception feedback. IEEE Trans
Autom Sci Eng 2014;11:1275–81.
[92] Mokhtar M, Liu X, Howe J. Multi-agent Gaussian Adaptive Resonance Theory Map for
building energy control and thermal comfort management of UCLan’s WestLakes Samuel
Lindow Building. Energy Build 2014;80:504–16.
[93] Ball M, Callaghan V. Introducing Intelligent Environments, Agents and Autonomy to
Users. Intell. Environ. (IE), 2011 7th Int. Conf., IEEE; n.d., p. 382–5.
[94] Röcker C, Janse MD, Portolan N, Streitz N. User requirements for intelligent home
environments: a scenario-driven approach and empirical cross-cultural study. Proc. 2005
Jt. Conf. Smart objects Ambient Intell. Innov. Context. Serv. usages Technol., ACM; n.d.,
p. 111–6.
[95] Bonino D, Corno F, De Russis L. Home energy consumption feedback: A user survey.
Energy Build 2012;47:383–93.
[96] Karjalainen S. Consumer preferences for feedback on household electricity consumption.
Energy Build 2011;43:458–67.
[97] Göçer Ö, Hua Y, Göçer K. Completing the missing link in building design process:
Enhancing post-occupancy evaluation method for effective feedback for building
performance. Build Environ 2015;89:14–27.
[98] Bakker LG, Hoes-van Oeffelen ECM, Loonen R, Hensen JLM. User satisfaction and
interaction with automated dynamic facades: A pilot study. Build Environ 2014;78:44–52.
[99] Karjalainen S. Should it be automatic or manual—The occupant’s perspective on the
design of domestic control systems. Energy Build 2013;65:119–26.
[100] Hart GW. Nonintrusive appliance load monitoring. Proc IEEE 1992;80:1870–91.
[101] Luo D, Norford LK, Shaw SR, Leeb SB. Monitoring HVAC equipment electrical loads
from a centralized location--methods and field test results/Discussion. ASHRAE Trans
2002;108:841.
[102] Berges M, Goldman E, Matthews HS, Soibelman L, Anderson K. User-centered
nonintrusive electricity load monitoring for residential buildings. J Comput Civ Eng
2011;25:471–80.
[103] Berges M, Goldman E, Matthews HS, Soibelman L. Training load monitoring algorithms
on highly sub-metered home electricity consumption data. Tsinghua Sci Technol
2008;13:406–11.
[104] Shaw SR, Leeb SB, Norford LK, Cox RW. Nonintrusive load monitoring and diagnostics
in power systems. IEEE Trans Instrum Meas 2008;57:1445–54.
[105] Jazizadeh F, Becerik-Gerber B, Berges M, Soibelman L. An unsupervised hierarchical
clustering based heuristic algorithm for facilitated training of electricity consumption
disaggregation systems. Adv Eng Informatics 2014;28:311–26.
[106] Bechhofer S. OWL: Web ontology language, Springer; 2009, p. 2008–9.
[107] McGuinness DL, Van Harmelen F. OWL web ontology language overview. W3C
Recomm 2004;10:2004.
[108] Grosof BN, Horrocks I, Volz R, Decker S. Description logic programs: combining logic
programs with description logic. Proc. 12th Int. Conf. World Wide Web, ACM; n.d., p.
48–57.
[109] Parzen E. On estimation of a probability density function and mode. Ann Math Stat
1962:1065–76.
[110] Shimazaki H, Shinomoto S. Kernel bandwidth optimization in spike rate estimation. J
Comput Neurosci 2010;29:171–82.
[111] Tapia EM, Intille SS, Larson K. Activity recognition in the home using simple and
ubiquitous sensors. Springer; 2004.
[112] Friedman J, Hastie T, Tibshirani R. The elements of statistical learning. vol. 1. Springer
series in statistics Springer, Berlin; 2001.
[113] Ng A. http://cs229.stanford.edu/notes/cs229-notes10.pdf n.d.
[114] Ahmadi-Karvigh S, Becerik-Gerber B, Soibelman L. A framework for allocating
personalized appliance-level disaggregated electricity consumption to daily activities.
Energy Build 2016;111:337–50.
[115] Bartusch C, Alvehag K. Further exploring the potential of residential demand response
programs in electricity distribution. Appl Energy 2014;125:39–59.
[116] Newsham GR, Bowker BG. The effect of utility time-varying pricing and load control
strategies on residential summer peak electricity use: a review. Energy Policy
2010;38:3289–96.
[117] Standby Power. Lawrence Berkeley National Laboratory 2015.
[118] Farahani S. ZigBee wireless networks and transceivers. Newnes; 2011.
[119] LAMY J-B. Ontology-Oriented Programming for Biomedical Informatics. Transform.
Healthc. with Internet Things Proc. EFMI Spec. Top. Conf. 2016, vol. 221, IOS Press;
2016, p. 64.
[120] Glimm B, Horrocks I, Motik B, Stoilos G, Wang Z. HermiT: An OWL 2 Reasoner. J
Autom Reason 2014;53:245–69. doi:10.1007/s10817-014-9305-1.
[121] Dahmen J, Thomas BL, Cook DJ, Wang X. Activity Learning as a Foundation for
Security Monitoring in Smart Homes. Sensors 2017;17:737. doi:10.3390/s17040737.
[122] Southern California Edison. Time-Of-Use Residential Rate Plans 2017.
[123] Faruqui A, Sergici S. Household response to dynamic pricing of electricity: a survey of 15
experiments. J Regul Econ 2010;38:193–225.
[124] Georgievski I, Nguyen TA, Aiello M. Combining activity recognition and AI planning for
energy-saving offices. Ubiquitous Intell. Comput. 2013 IEEE 10th Int. Conf. 10th Int.
Conf. Auton. Trust. Comput., IEEE; 2013, p. 238–45.
[125] Garg V, Bansal NK. Smart occupancy sensors to reduce energy consumption. Energy
Build 2000;32:81–7.
[126] Singhvi V, Krause A, Guestrin C, Garrett Jr JH, Matthews HS. Intelligent light control
using sensor networks. Proc. 3rd Int. Conf. Embed. networked Sens. Syst., ACM; n.d., p.
218–29.
[127] Kotrlik J, Higgins C. Organizational research: Determining appropriate sample size in
survey research appropriate sample size in survey research. Inf Technol Learn Perform J
2001;19:43.
[128] Jaeger TF. Categorical data analysis: Away from ANOVAs (transformation or not) and
towards logit mixed models. J Mem Lang 2008;59:434–46.
[129] Seltman HJ. Experimental design and analysis. Online:
http://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf; 2012.
[130] McCulloch CE, Neuhaus JM. Generalized linear mixed models. Wiley Online Library;
2001.
[131] Zuur AF, Ieno EN, Walker NJ, Saveliev AA, Smith GM. Mixed effects models and
extensions in ecology with R. New York: Springer. 574 P 2009.
[132] Bozdogan H. Model selection and Akaike’s information criterion (AIC): The general
theory and its analytical extensions. Psychometrika 1987;52:345–70.
[133] Sauro J, Lewis JR. Quantifying the user experience: Practical statistics for user research.
Elsevier; 2012.
[134] Verbeke G, Molenberghs G. Linear mixed models for longitudinal data. Springer Science
& Business Media; 2009.
[135] Diener E, Sandvik E, Seidlitz L, Diener M. The relationship between income and
subjective well-being: Relative or absolute? Soc Indic Res 1993;28:195–223.
[136] Couch LL, Adams JM, Jones WH. The assessment of trust orientation. J Pers Assess
1996;67:305–23.
[137] Mooradian T, Renzl B, Matzler K. Who trusts? Personality, trust and knowledge sharing.
Manag Learn 2006;37:523–40.
[138] Evans AM, Revelle W. Survey and behavioral measurements of interpersonal trust. J Res
Pers 2008;42:1585–93.
[139] Amichai-Hamburger Y, Vinitzky G. Social network use and personality. Comput Human
Behav 2010;26:1289–95.
[140] Nau DS, Au T-C, Ilghami O, Kuter U, Murdock JW, Wu D, et al. SHOP2: An HTN
planning system. J Artif Intell Res(JAIR) 2003;20:379–404.
[141] Erol K, Hendler J, Nau DS. HTN planning: Complexity and expressivity. AAAI, vol. 94,
n.d., p. 1123–8.
[142] Watkins CJCH, Dayan P. Q-learning. Mach Learn 1992;8:279–92.
doi:10.1007/BF00992698.
[143] Bhattacharya AA, Hong D, Culler D, Ortiz J, Whitehouse K, Wu E. Automated metadata
construction to support portable building applications. Proc. 2nd ACM Int. Conf. Embed.
Syst. Energy-Efficient Built Environ., ACM; 2015, p. 3–12.
[144] Balaji B, Verma C, Narayanaswamy B, Agarwal Y. Zodiac: Organizing large deployment
of sensors to create reusable applications for buildings. Proc. 2nd ACM Int. Conf. Embed.
Syst. Energy-Efficient Built Environ., ACM; 2015, p. 13–22.