Estimating probabilities from data - Bootstrapping
You can use the same idea we used in simulations to estimate probabilities from experiments. So, if \(I\) is the background information and \(A\) is a logical proposition that is experimentally testable, then

\[
p(A|I) \approx \frac{\text{number of experiments in which } A \text{ is True}}{\text{number of experiments}}.
\]
There is a catch here. The experiments must be performed independently. This means that you should prepare any apparatus you are using in exactly the same way for all experiments and that no experiment should affect any other in any way. Most of the experiments we run in a lab are independent. However, this assumption may be wrong for data collected in the wild.
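To make the recipe concrete, here is a minimal sketch with made-up outcomes (the array below is hypothetical, not data from any real experiment): we record whether \(A\) was True in each experiment and divide the count by the number of experiments.

import numpy as np

# Hypothetical results of 8 independent experiments:
# each entry records whether A turned out to be True.
outcomes = np.array([True, False, False, True, False, False, False, True])

# Divide the number of times A is True by the number of experiments.
# Booleans behave like 0/1, so the mean is exactly this ratio.
p_A_g_I = outcomes.mean()
print('p(A|I) ~= {0:1.2f}'.format(p_A_g_I))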
Example - Estimating the probability of excessive energy use
Let’s try this in practice using the high-performance building dataset. I’m importing the libraries and loading the data below.
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(rc={"figure.dpi":100, 'savefig.dpi':300})
sns.set_context('notebook')
sns.set_style("ticks")
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina', 'svg')
import requests
import os
def download(url, local_filename=None):
    """
    Downloads the file in the ``url`` and saves it in the current working directory.
    """
    data = requests.get(url)
    if local_filename is None:
        local_filename = os.path.basename(url)
    with open(local_filename, 'wb') as fd:
        fd.write(data.content)
# The url of the file we want to download
url = 'https://raw.githubusercontent.com/PurdueMechanicalEngineering/me-297-intro-to-data-science/master/data/temperature_raw.xlsx'
download(url)
import numpy as np
import pandas as pd
df = pd.read_excel('temperature_raw.xlsx')
df = df.dropna(axis=0)
df.date = pd.to_datetime(df['date'], format='%Y-%m-%d')
The background information \(I\) is as follows:
A random household is picked on a random week during the heating season. The heating season is defined to be the time of the year during which the weekly average of the external temperature is less than 55 degrees F.
The logical proposition \(A\) is:
The weekly HVAC energy consumption of the household exceeds 300 kWh.
First, we start by selecting the subset of the data that pertains to the heating season.
df_heating = df[df['t_out'] < 55]
df_heating
| | household | date | score | t_out | t_unit | hvac |
|---|---|---|---|---|---|---|
| 0 | a1 | 2018-01-07 | 100.0 | 4.283373 | 66.693229 | 246.473231 |
| 1 | a10 | 2018-01-07 | 100.0 | 4.283373 | 66.356134 | 5.492116 |
| 2 | a11 | 2018-01-07 | 58.0 | 4.283373 | 71.549132 | 402.094327 |
| 3 | a12 | 2018-01-07 | 64.0 | 4.283373 | 73.429514 | 211.692244 |
| 4 | a13 | 2018-01-07 | 100.0 | 4.283373 | 63.923937 | 0.850536 |
| ... | ... | ... | ... | ... | ... | ... |
| 5643 | c44 | 2020-02-25 | 59.0 | 43.642388 | 76.494637 | 19.135139 |
| 5644 | c45 | 2020-02-25 | 87.0 | 43.642388 | 71.165052 | 30.794281 |
| 5646 | c47 | 2020-02-25 | 97.0 | 43.642388 | 68.603287 | 5.339391 |
| 5647 | c48 | 2020-02-25 | 92.0 | 43.642388 | 73.429239 | 18.040078 |
| 5649 | c50 | 2020-02-25 | 59.0 | 43.642388 | 77.716955 | 14.405155 |

2741 rows × 6 columns
We have 2741 such measurements. Now we want to pick a random household on a random week. Unfortunately, in this dataset this is not exactly equivalent to picking a row at random, because there are some missing data. So, if we wanted to be very picky, we would have to find the weeks during which we have data from all households. However, we won't be so picky. The result would not be far off from what we estimate by randomly picking rows.
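If you ever wanted to check how uneven the coverage is, a quick diagnostic (a sketch, not part of the analysis that follows) is to count how many distinct households report on each date:

# Count the distinct households that have a measurement on each date;
# dates with fewer households than the maximum have missing data.
households_per_week = df_heating.groupby('date')['household'].nunique()
print(households_per_week.describe())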
Okay, so here is what we are going to do. We will pick \(N\) rows at random. For each one of the rows, we are going to test whether the logical proposition \(A\) is True. Finally, we are going to divide the number of times \(A\) is True by \(N\). Alright, let's do it.
# The number of rows to pick
N = 500
# Each row corresponds to an integer from 0 to 2740. Pick
# N such integers at random
rows = np.random.randint(0, df_heating.shape[0], size=N)
rows
array([2587, 1052, 2286, 1129, 583, 1962, 179, 1660, 1036, 22, 798,
1486, 2555, 605, 960, 907, 105, 2275, 2717, 2293, 760, 2014,
1960, 2534, 672, 2244, 875, 2373, 1688, 1456, 25, 647, 1955,
887, 1784, 1329, 26, 2019, 333, 1063, 713, 2602, 1694, 2697,
345, 369, 1864, 1355, 2330, 105, 1351, 2659, 264, 777, 1863,
1328, 1713, 199, 1993, 1598, 1357, 2070, 424, 515, 2024, 1923,
757, 1269, 2137, 1877, 2619, 1754, 2233, 2516, 931, 1339, 304,
547, 597, 736, 1059, 1079, 1541, 496, 318, 483, 1882, 2155,
1092, 210, 1111, 2350, 341, 1010, 134, 1007, 1998, 1071, 1298,
1406, 225, 2002, 1960, 1712, 315, 2054, 807, 2399, 2475, 1145,
2593, 1293, 2307, 768, 1102, 2394, 502, 1701, 2720, 1451, 284,
1079, 223, 2314, 1349, 1930, 2145, 1038, 1246, 306, 2087, 66,
1671, 2360, 885, 1816, 1837, 671, 2528, 1795, 2023, 184, 612,
1867, 1180, 409, 1434, 2330, 2267, 1870, 1812, 520, 1173, 968,
201, 766, 323, 458, 1432, 70, 1311, 2008, 1693, 1905, 2307,
1354, 805, 2534, 964, 625, 1546, 168, 469, 797, 28, 1329,
215, 648, 2137, 1204, 2198, 1904, 294, 1253, 1798, 1763, 2130,
1195, 2073, 2412, 1217, 743, 771, 2338, 1411, 1097, 2344, 1398,
1131, 1614, 1277, 1341, 259, 1855, 522, 1920, 714, 437, 38,
83, 664, 902, 433, 1413, 1027, 1935, 1766, 2514, 2064, 2376,
1765, 1394, 113, 749, 822, 1531, 916, 46, 179, 2181, 2411,
681, 1236, 215, 1563, 2117, 478, 892, 1913, 167, 853, 1243,
1621, 1125, 442, 960, 2244, 2404, 922, 1696, 1005, 93, 1945,
2196, 1634, 369, 1246, 2478, 831, 1369, 1039, 1795, 664, 706,
2608, 1507, 2341, 2519, 1573, 2229, 471, 1521, 2478, 453, 2010,
1205, 853, 2107, 1774, 2346, 1776, 75, 252, 1397, 1306, 2326,
2365, 271, 146, 2157, 2425, 2601, 2667, 846, 107, 2272, 1683,
1147, 2087, 1331, 978, 370, 665, 315, 1044, 13, 1415, 803,
1959, 2145, 2073, 628, 860, 512, 657, 1286, 214, 2287, 2450,
405, 1160, 764, 800, 1771, 1449, 1912, 2732, 980, 78, 2398,
1649, 2410, 555, 526, 132, 2188, 54, 839, 622, 2200, 914,
1157, 1922, 1307, 2543, 2480, 1452, 1047, 148, 1562, 2337, 2716,
2686, 2307, 1739, 949, 353, 940, 2593, 874, 296, 1520, 1136,
1006, 2041, 2078, 758, 1157, 2180, 1622, 1582, 2159, 1610, 179,
1149, 176, 18, 1655, 2004, 42, 1495, 2673, 2603, 2455, 607,
2334, 1533, 525, 2369, 2713, 371, 2502, 2711, 598, 966, 784,
2127, 327, 85, 1084, 782, 157, 469, 1052, 1985, 723, 2277,
1844, 572, 2364, 305, 1704, 312, 1868, 1079, 1117, 104, 173,
1919, 133, 2174, 2489, 1480, 715, 495, 1236, 1904, 1793, 1795,
788, 2179, 2430, 2050, 2604, 1480, 2350, 178, 1094, 101, 491,
746, 2193, 775, 98, 2608, 197, 2131, 666, 2560, 2398, 1492,
2318, 2320, 2001, 1789, 1245, 79, 223, 1982, 1092, 492, 1187,
829, 2647, 1479, 935, 346, 1317, 861, 2406, 2314, 384, 1119,
917, 1680, 1030, 75, 798, 1950, 1803, 2295, 1499, 2700, 399,
1738, 2056, 231, 454, 2431, 2525, 2372, 81, 2660, 321, 2635,
2419, 2252, 1395, 1495, 2243])
Now we need to pick the rows of df_heating that have those indices. We can use pandas.DataFrame.iloc to do this:
df_heating_exp = df_heating.iloc[rows]
df_heating_exp
| | household | date | score | t_out | t_unit | hvac |
|---|---|---|---|---|---|---|
| 5456 | a15 | 2020-02-09 | 80.0 | 38.123983 | 75.235615 | 103.460516 |
| 2384 | c35 | 2018-12-02 | 98.0 | 36.919444 | 70.374578 | 103.333777 |
| 5075 | b26 | 2019-12-15 | 78.0 | 36.130754 | 73.945914 | 75.234305 |
| 2470 | b21 | 2018-12-16 | 22.0 | 35.100620 | 76.782192 | 0.000000 |
| 645 | c46 | 2018-04-01 | 77.0 | 45.607391 | 72.822520 | 46.597443 |
| ... | ... | ... | ... | ... | ... | ... |
| 5243 | c44 | 2020-01-05 | 53.0 | 42.495313 | 77.833209 | 117.672719 |
| 5033 | c34 | 2019-12-08 | 94.0 | 37.535392 | 69.029183 | 173.136284 |
| 2763 | a7 | 2019-01-27 | 45.0 | 21.504117 | 77.148845 | 260.506823 |
| 2877 | b28 | 2019-02-10 | 49.0 | 38.192808 | 73.968347 | 12.313202 |
| 5022 | b23 | 2019-12-08 | 66.0 | 37.535392 | 77.679315 | 73.887601 |

500 rows × 6 columns
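As an aside, pandas can do this kind of resampling in one step with pandas.DataFrame.sample; the following one-liner is an equivalent alternative to the two steps above (we stick with the explicit version in the rest of this section):

# Sample N rows uniformly at random, with replacement,
# which is exactly what the randint + iloc combination does.
df_heating_exp_alt = df_heating.sample(n=N, replace=True)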
Now let's evaluate the logical proposition \(A\) for each one of these rows.
df_heating_exp_A = df_heating_exp['hvac'] > 300
df_heating_exp_A
5456 False
2384 False
5075 False
2470 False
645 False
...
5243 False
5033 False
2763 False
2877 False
5022 False
Name: hvac, Length: 500, dtype: bool
Now we need to count the number of times the logical proposition was True. We can do this with the function pandas.Series.value_counts():
df_heating_exp_A_counts = df_heating_exp_A.value_counts()
df_heating_exp_A_counts
False 466
True 34
Name: hvac, dtype: int64
This returned both the True and the False counts. To get just the True counts:
number_of_A_true = df_heating_exp_A_counts[True]
number_of_A_true
34
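As an aside, because True behaves like 1 and False like 0, a sum over the Boolean series gives the same count, and a mean gives the probability estimate directly; this is just an alternative to the value_counts() route:

# Same count as above, since True sums as 1:
print(df_heating_exp_A.sum())
# And the mean is directly the estimate of p(A|I):
print(df_heating_exp_A.mean())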
And now we can estimate the probability by dividing the number of times \(A\) was True by the number of randomly selected rows \(N\). We get this:
p_A_g_I = number_of_A_true / N
print('p(A|I) ~= {0:1.2f}'.format(p_A_g_I))
p(A|I) ~= 0.07
Nice! This was easy. Now, you may ask: why didn't you pick all rows? I could have, and if I had a really, really big number of rows, I would be getting a very good estimate of the probability. But you never know if you have enough data... It is not like the simulated experiment, where we could do as many runs as we liked. Most of the time, we have a finite amount of data that is not good enough for an accurate estimate. In other words, there is a bit of epistemic uncertainty in our estimate of the probability. There is something we can do to estimate this epistemic uncertainty. Let me show you.
First, put everything we did above in a nice function:
def estimate_p_A_g_I(N, df_heating):
    """
    Estimates the probability of A given I by randomly picking N rows
    from the data frame df_heating.

    Arguments:
        N          - The number of rows to pick at random.
        df_heating - The data frame containing the heating data.

    Returns: The number of rows with A True divided by N.
    """
    rows = np.random.randint(0, df_heating.shape[0], size=N)
    df_heating_exp = df_heating.iloc[rows]
    df_heating_exp_A = df_heating_exp['hvac'] > 300
    df_heating_exp_A_counts = df_heating_exp_A.value_counts()
    number_of_A_true = df_heating_exp_A_counts[True] if True in df_heating_exp_A_counts.keys() else 0
    p_A_g_I = number_of_A_true / N
    return p_A_g_I
Now we can call this function as many times as we want. Each time we get an estimate of the probability of \(A\) given \(I\). It is going to be a different estimate every time because the rows are selected at random. Here it is 10 times:
for i in range(10):
    p_A_g_I = estimate_p_A_g_I(500, df_heating)
    print('{0:d} estimate of p(A|I): {1:1.3f}'.format(i + 1, p_A_g_I))
1 estimate of p(A|I): 0.064
2 estimate of p(A|I): 0.076
3 estimate of p(A|I): 0.058
4 estimate of p(A|I): 0.058
5 estimate of p(A|I): 0.058
6 estimate of p(A|I): 0.062
7 estimate of p(A|I): 0.086
8 estimate of p(A|I): 0.096
9 estimate of p(A|I): 0.070
10 estimate of p(A|I): 0.068
Alright, every time a different number. To get a sense of the epistemic uncertainty, we can do it many many times, say 1000 times, and plot a histogram of our estimates. Like this:
# A place to store the estimates
p_A_g_Is = []
# The number of rows we sample every time
N = 500
# Put 1000 estimates in there
for i in range(1000):
    p_A_g_I = estimate_p_A_g_I(N, df_heating)
    p_A_g_Is.append(p_A_g_I)
# And now do the histogram
fig, ax = plt.subplots()
ax.hist(p_A_g_Is)
ax.set_xlabel('$p(A|I)$')
ax.set_ylabel('Count')
ax.set_title('Bootstrap estimate $N = {0:d}$'.format(N));
So looking at this, we can say that the \(p(A|I)\) is around 7%, but it could be as low as 5% or as high as 9%. This way of estimating the uncertainty of statistics by resampling the data is called bootstrapping.
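If you want to turn the eyeballed range into numbers, one common summary (a sketch; the analysis above only looks at the histogram) is the interval containing the middle 95% of the bootstrap estimates:

# The 2.5% and 97.5% percentiles bracket the middle 95%
# of the bootstrap estimates.
low, high = np.percentile(p_A_g_Is, [2.5, 97.5])
print('p(A|I) is likely between {0:1.3f} and {1:1.3f}'.format(low, high))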
Questions
Rerun the bootstrapping estimate of \(p(A|I)\) using \(N=100\) (very small). What happens to the range of possibilities?
Rerun the bootstrapping estimate of \(p(A|I)\) using \(N=1000\) (reasonable). Did the range of possibilities change compared to \(N=500\)?
Rerun the bootstrapping estimate of \(p(A|I)\) using \(N=2741\) (the maximum, and reasonable). What about now? How does the range look now? Why didn't the uncertainty collapse completely? Hint: Think about how we sample the rows of the data frame. Is there a chance that we miss some?
What if you go to \(N=5000\)?
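To save some typing while answering these questions, here is a small helper (a sketch that assumes the estimate_p_A_g_I function and df_heating from above are in scope; the name bootstrap_histogram is made up):

def bootstrap_histogram(N, df_heating, num_estimates=1000):
    """Repeat the bootstrap num_estimates times for a given N
    and plot a histogram of the estimates."""
    estimates = [estimate_p_A_g_I(N, df_heating)
                 for _ in range(num_estimates)]
    fig, ax = plt.subplots()
    ax.hist(estimates)
    ax.set_xlabel('$p(A|I)$')
    ax.set_ylabel('Count')
    ax.set_title('Bootstrap estimate $N = {0:d}$'.format(N))

# For example:
# bootstrap_histogram(100, df_heating)
# bootstrap_histogram(1000, df_heating)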