1 Youth Risk Behavior Surveillance

Every two years, the Centers for Disease Control and Prevention (CDC) conducts the Youth Risk Behavior Surveillance System (YRBSS) survey, collecting data from high schoolers (9th through 12th grade) to analyze health patterns. You will work with a selected group of variables from a random sample of observations from one of the years the YRBSS was conducted.

1.1 Load the data

This data is part of the openintro package, and we can load and inspect it. There are observations on 13 different variables, some categorical and some numerical. The meaning of each variable can be found by bringing up the help file:

?yrbss

data(yrbss)
glimpse(yrbss)
## Rows: 13,583
## Columns: 13
## $ age                      <int> 14, 14, 15, 15, 15, 15, 15, 14, 15, 15, 15, …
## $ gender                   <chr> "female", "female", "female", "female", "fem…
## $ grade                    <chr> "9", "9", "9", "9", "9", "9", "9", "9", "9",…
## $ hispanic                 <chr> "not", "not", "hispanic", "not", "not", "not…
## $ race                     <chr> "Black or African American", "Black or Afric…
## $ height                   <dbl> NA, NA, 1.73, 1.60, 1.50, 1.57, 1.65, 1.88, …
## $ weight                   <dbl> NA, NA, 84.4, 55.8, 46.7, 67.1, 131.5, 71.2,…
## $ helmet_12m               <chr> "never", "never", "never", "never", "did not…
## $ text_while_driving_30d   <chr> "0", NA, "30", "0", "did not drive", "did no…
## $ physically_active_7d     <int> 4, 2, 7, 0, 2, 1, 4, 4, 5, 0, 0, 0, 4, 7, 7,…
## $ hours_tv_per_school_day  <chr> "5+", "5+", "5+", "2", "3", "5+", "5+", "5+"…
## $ strength_training_7d     <int> 0, 0, 0, 0, 1, 0, 2, 0, 3, 0, 3, 0, 0, 7, 7,…
## $ school_night_hours_sleep <chr> "8", "6", "<5", "6", "9", "8", "9", "6", "<5…

Before you carry on with your analysis, it is always a good idea to run skimr::skim() to get a feel for missing values, summary statistics of numerical variables, and a rough histogram of each.
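
If skimr is not available, the missingness check can be sketched with base R alone; the toy data frame below stands in for yrbss so the sketch is self-contained:

```r
# Minimal base-R alternative to skimr::skim() for spotting missing values.
# The toy data frame is illustrative, not the real yrbss data.
toy <- data.frame(weight = c(55, NA, 70, NA),
                  height = c(1.6, 1.7, NA, 1.8))

na_counts <- colSums(is.na(toy))  # number of NAs per column
na_counts
```

In the report itself you would simply call skimr::skim(yrbss) on the full data frame.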

1.2 Exploratory Data Analysis

You will first analyze the weights of participants in kilograms. Using visualization and summary statistics, describe the distribution of weights. How many weight observations are missing?

From the summary statistics, we can see that 1,004 observations are missing a value for weight. The histogram also shows that the weights are positively skewed (right-skewed): most participants are relatively light, with a long tail of heavier observations.

summary(yrbss$weight) #Summary tells us that there are 1,004 observations missing
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's 
##      30      56      64      68      76     181    1004
yrbss %>% 
  filter(!is.na(weight)) %>% 
  ggplot(aes(x=weight)) +
  geom_histogram() +
  labs(title="Weight distribution has positive skew") +
  xlab("Weight") +
  ylab("Number of observations")

Next, consider the possible relationship between a high schooler’s weight and their physical activity. Plotting the data is a useful first step because it helps us quickly visualize trends, identify strong associations, and develop research questions.

Let’s create a new variable physical_3plus, which will be yes if they are physically active for at least 3 days a week, and no otherwise.

yrbss <- yrbss %>% 
  mutate(physical_3plus = ifelse(physically_active_7d >= 3, "yes", "no"))

yrbss %>% filter(!is.na(physical_3plus)) %>% 
  group_by(physical_3plus) %>% 
  summarise(count = n()) %>% 
  mutate(prop= count/sum(count))
## # A tibble: 2 x 3
##   physical_3plus count  prop
##   <chr>          <int> <dbl>
## 1 no              4404 0.331
## 2 yes             8906 0.669

Can you provide a 95% confidence interval for the population proportion of high schoolers that are NOT active 3 or more days per week?

prop.test(sum(yrbss$physical_3plus == "no", na.rm = TRUE), 
          nrow(yrbss %>% filter(!is.na(physical_3plus))),
          conf.level = 0.95)
## 
##  1-sample proportions test with continuity correction
## 
## data:  sum(yrbss$physical_3plus == "no", na.rm = TRUE) out of nrow(yrbss %>% filter(!is.na(physical_3plus)))
## X-squared = 1522, df = 1, p-value <2e-16
## alternative hypothesis: true p is not equal to 0.5
## 95 percent confidence interval:
##  0.323 0.339
## sample estimates:
##     p 
## 0.331
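
As a sanity check, the same interval can be approximated by hand with the normal approximation p̂ ± z·√(p̂(1−p̂)/n), using the counts reported above (4,404 "no" out of 13,310 non-missing responses):

```r
# Hand check on prop.test: normal-approximation 95% CI for the
# proportion of "no" responses, from the counts shown above.
n     <- 4404 + 8906                      # non-missing observations
p_hat <- 4404 / n                         # sample proportion of "no"
se    <- sqrt(p_hat * (1 - p_hat) / n)    # standard error of p_hat
ci    <- p_hat + c(-1, 1) * qnorm(0.975) * se
round(ci, 3)                              # close to prop.test's (0.323, 0.339)
```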

Make a boxplot of physical_3plus vs. weight. Is there a relationship between these two variables? What did you expect and why?

We expected less physically active people to weigh more, but we found the opposite. This is most likely because muscle weighs more than fat.

yrbss %>% 
  ggplot(aes(x=physical_3plus, y=weight)) +
  geom_boxplot() +
  labs(title="Muscles are heavier than fat") +
  xlab("Physically active 3+ days per week") +
  ylab("Weight")

1.3 Confidence Interval

Boxplots show how the medians of the two distributions compare, but we can also compare the means of the distributions using either a confidence interval or a hypothesis test. Note that when we calculate the mean, SD, etc. of weight in these groups, we must ignore any missing values by setting na.rm = TRUE.

yrbss %>%
  group_by(physical_3plus) %>%
  filter(!is.na(physical_3plus)) %>% 
  summarise(mean_weight = mean(weight, na.rm = TRUE),
            sd_weight = sd(weight, na.rm=TRUE),
            count = n(),
            se_weight = sd_weight/sqrt(count),
            t_critical = qt(0.975, count-1), 
            margin_of_error = t_critical * se_weight,
            lower = mean_weight - t_critical * se_weight,
            upper = mean_weight + t_critical * se_weight
            )
## # A tibble: 2 x 9
##   physical_3plus mean_weight sd_weight count se_weight t_critical
##   <chr>                <dbl>     <dbl> <int>     <dbl>      <dbl>
## 1 no                    66.7      17.6  4404     0.266       1.96
## 2 yes                   68.4      16.5  8906     0.175       1.96
## # … with 3 more variables: margin_of_error <dbl>, lower <dbl>, upper <dbl>

There is an observed difference of about 1.77 kg (68.44 − 66.67), and the two confidence intervals do not overlap, so the difference appears statistically significant at the 95% confidence level. Let us also conduct a hypothesis test.

1.4 Hypothesis test with formula

Write the null and alternative hypotheses for testing whether mean weights are different for those who exercise at least 3 times a week and those who don’t.

Null hypothesis: the mean weights of the two groups are equal. Alternative hypothesis: the mean weights of the two groups differ.

t.test(weight ~ physical_3plus, data = yrbss)
## 
##  Welch Two Sample t-test
## 
## data:  weight by physical_3plus
## t = -5, df = 7479, p-value = 9e-08
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -2.42 -1.12
## sample estimates:
##  mean in group no mean in group yes 
##              66.7              68.4

1.5 Hypothesis test with infer

Next, we will introduce a new function, hypothesize, that falls into the infer workflow. You will use this method for conducting hypothesis tests.

But first, we need to calculate the observed difference in means, which we will save as obs_diff.

obs_diff <- yrbss %>%
  specify(weight ~ physical_3plus) %>%
  calculate(stat = "diff in means", order = c("yes", "no"))

obs_diff
## # A tibble: 1 x 1
##    stat
##   <dbl>
## 1  1.77

Notice how you can use the functions specify and calculate again, as you did when calculating confidence intervals. Here, though, the statistic you are calculating is the difference in means, with order = c("yes", "no"), i.e., yes − no.

After you have initialized the test, you need to simulate the test on the null distribution, which we will save as null_dist.

null_dist <- yrbss %>%
  specify(weight ~ physical_3plus) %>%
  hypothesize(null = "independence") %>%
  generate(reps = 1000, type = "permute") %>%
  calculate(stat = "diff in means", order = c("yes", "no"))

null_dist
## # A tibble: 1,000 x 2
##    replicate    stat
##        <int>   <dbl>
##  1         1  0.326 
##  2         2 -0.311 
##  3         3  0.286 
##  4         4 -0.290 
##  5         5 -0.273 
##  6         6 -0.348 
##  7         7  0.224 
##  8         8 -0.473 
##  9         9 -0.236 
## 10        10 -0.0533
## # … with 990 more rows

Here, hypothesize is used to set the null hypothesis as a test for independence, i.e., that there is no difference between the two population means. In one-sample cases, the null argument can be set to "point" to test a hypothesis relative to a point estimate.

Also, note that the type argument within generate is set to permute, which is the argument when generating a null distribution for a hypothesis test.
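
For intuition, the permutation idea behind generate(type = "permute") can be sketched in base R: repeatedly shuffle the group labels (which enforces independence), recompute the statistic, and compare the observed difference to the resulting null distribution. The toy vectors below are illustrative, not the yrbss data.

```r
# Base-R sketch of a permutation test (toy data, not yrbss).
set.seed(1)
weight <- c(60, 72, 65, 80, 55, 77, 68, 71)
group  <- c("no", "yes", "no", "yes", "no", "yes", "yes", "yes")

obs_diff <- mean(weight[group == "yes"]) - mean(weight[group == "no"])

null_stats <- replicate(1000, {
  shuffled <- sample(group)   # shuffling breaks any weight-group association
  mean(weight[shuffled == "yes"]) - mean(weight[shuffled == "no"])
})

# Two-sided p-value: share of permutations at least as extreme as observed
p_value <- mean(abs(null_stats) >= abs(obs_diff))
```

This is exactly what infer does under the hood, with specify/hypothesize/generate/calculate giving each step a name.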

We can visualize this null distribution with the following code:

ggplot(data = null_dist, aes(x = stat)) +
  geom_histogram()

Now that the test is initialized and the null distribution formed, we can visualise how many of these null permutations have a difference of at least the observed 1.77 kg.

We can also calculate the p-value for this hypothesis test using the function infer::get_p_value().

null_dist %>% visualize() +
  shade_p_value(obs_stat = obs_diff, direction = "two-sided")

null_dist %>%
  get_p_value(obs_stat = obs_diff, direction = "two_sided")
## # A tibble: 1 x 1
##   p_value
##     <dbl>
## 1       0

This is the standard workflow for performing hypothesis tests. Note that a reported p-value of 0 simply means that none of the 1,000 permutations produced a difference as extreme as the observed one; it is better reported as p < 0.001.

2 IMDB ratings: Differences between directors

Recall the IMDB ratings data. I would like you to explore whether the mean IMDB ratings for Steven Spielberg and Tim Burton are the same or not. I have already calculated the confidence intervals for the mean ratings of these two directors, and as you can see they overlap.

knitr::include_graphics(here::here("images", "directors.png"), error = FALSE)

First, I would like you to reproduce this graph. You may find geom_errorbar() and geom_rect() useful.

In addition, you will run a hypothesis test. You should use both the t.test command and the infer package to simulate from a null distribution, where you assume zero difference between the two.

Before anything, write down the null and alternative hypotheses, as well as the resulting test statistic and the associated t-stat or p-value. At the end of the day, what do you conclude?

You can load the data and examine its structure

movies <- read_csv("data/movies.csv")
glimpse(movies)
## Rows: 2,961
## Columns: 11
## $ title               <chr> "Avatar", "Titanic", "Jurassic World", "The Aveng…
## $ genre               <chr> "Action", "Drama", "Action", "Action", "Action", …
## $ director            <chr> "James Cameron", "James Cameron", "Colin Trevorro…
## $ year                <dbl> 2009, 1997, 2015, 2012, 2008, 1999, 1977, 2015, 2…
## $ duration            <dbl> 178, 194, 124, 173, 152, 136, 125, 141, 164, 93, …
## $ gross               <dbl> 7.61e+08, 6.59e+08, 6.52e+08, 6.23e+08, 5.33e+08,…
## $ budget              <dbl> 2.37e+08, 2.00e+08, 1.50e+08, 2.20e+08, 1.85e+08,…
## $ cast_facebook_likes <dbl> 4834, 45223, 8458, 87697, 57802, 37723, 13485, 92…
## $ votes               <dbl> 886204, 793059, 418214, 995415, 1676169, 534658, …
## $ reviews             <dbl> 3777, 2843, 1934, 2425, 5312, 3917, 1752, 1752, 3…
## $ rating              <dbl> 7.9, 7.7, 7.0, 8.1, 9.0, 6.5, 8.7, 7.5, 8.5, 7.2,…

Your R code and analysis should go here. If you want to insert a blank chunk of R code you can just hit Ctrl/Cmd+Alt+I

movies_filtered <- movies %>% 
  filter(director %in% c("Tim Burton", "Steven Spielberg")) %>% 
  group_by(director) %>% 
  summarise(
    mean = mean(rating),
    sd = sd(rating),
    count = n(),
    t_critical = qt(0.975, count-1),
    se = sd/sqrt(count),
    margin_of_error = t_critical * se,
    ci_lower = mean - margin_of_error,
    ci_upper = mean + margin_of_error
  )

movies_filtered
## # A tibble: 2 x 9
##   director   mean    sd count t_critical    se margin_of_error ci_lower ci_upper
##   <chr>     <dbl> <dbl> <int>      <dbl> <dbl>           <dbl>    <dbl>    <dbl>
## 1 Steven S…  7.57 0.695    23       2.07 0.145           0.301     7.27     7.87
## 2 Tim Burt…  6.93 0.749    16       2.13 0.187           0.399     6.53     7.33
movies_filtered %>% 
  ggplot(aes(x = mean, y = reorder(director, mean), color = director)) +
  geom_point() +
  geom_errorbar(aes(xmin = ci_lower, xmax = ci_upper), width = 0.1) +
  geom_rect(aes(xmin = max(ci_lower), xmax = min(ci_upper), ymin = -Inf, ymax = Inf), 
            color = NA, alpha = 0.2) +
  geom_text(aes(label = round(mean, digits = 2)), vjust = -1, size = 7) +
  geom_text(aes(label = round(ci_lower, digits = 2)), hjust = 3, vjust = -1) +
  geom_text(aes(label = round(ci_upper, digits = 2)), hjust = -2, vjust = -1) +
  ylab("") +
  xlab("Mean IMDB rating") +
  labs(title = "Do Spielberg and Burton have the same IMDB ratings?", 
       subtitle = "95% confidence intervals overlap") +
  theme_bw() +
  theme(legend.position = "none",
        plot.margin = margin(2,.1,2,.1, "cm")) 

3 Omega Group plc - Pay Discrimination

At the last board meeting of Omega Group Plc., the headquarters of a large multinational company, the issue was raised that women were being discriminated against in the company, in the sense that salaries were not the same for male and female executives. A quick analysis of a sample of 50 employees (of which 24 men and 26 women) revealed that the average salary for men was about 8,700 higher than for women. This seemed like a considerable difference, so it was decided that a further analysis of the company salaries was warranted.

You are asked to carry out the analysis. The objective is to find out whether there is indeed a significant difference between the salaries of men and women, and whether the difference is due to discrimination or whether it is based on another, possibly valid, determining factor.

3.1 Loading the data

omega <- read_csv(here::here("data", "omega.csv"))
glimpse(omega) # examine the data frame
## Rows: 50
## Columns: 3
## $ salary     <dbl> 81894, 69517, 68589, 74881, 65598, 76840, 78800, 70033, 63…
## $ gender     <chr> "male", "male", "male", "male", "male", "male", "male", "m…
## $ experience <dbl> 16, 25, 15, 33, 16, 19, 32, 34, 1, 44, 7, 14, 33, 19, 24, …

3.2 Relationship Salary - Gender ?

The data frame omega contains the salaries for the sample of 50 executives in the company. Can you conclude that there is a significant difference between the salaries of the male and female executives?

Note that you can perform different types of analyses, and check whether they all lead to the same conclusion

  • Confidence intervals
  • Hypothesis testing
  • Correlation analysis
  • Regression

Calculate summary statistics on salary by gender. Also, create and print a dataframe where, for each gender, you show the mean, SD, sample size, the t-critical value, the SE, the margin of error, and the low/high endpoints of a 95% confidence interval.

# Summary Statistics of salary by gender:
mosaic::favstats(salary ~ gender, data = omega)
##   gender   min    Q1 median    Q3   max  mean   sd  n missing
## 1 female 47033 60338  64618 70033 78800 64543 7567 26       0
## 2   male 54768 68331  74675 78568 84576 73239 7463 24       0
# We now calculate the t-critical value, the standard error, the margin of error and the low/high endpoints of a 95% confidence interval:

omegastats <- omega %>%
  select(salary, gender) %>%
  group_by(gender) %>%
  summarise(
    mean = mean(salary),                  # mean
    SD = sd(salary),                      # standard deviation
    SampleSize = n(),                     # sample size
    t_crit = qt(0.975, SampleSize - 1),   # t-critical value
    SE = SD / sqrt(SampleSize),           # standard error
    MarginError = t_crit * SE,            # margin of error
    Lowend = mean - MarginError,          # low endpoint of the confidence interval
    Highend = mean + MarginError)         # high endpoint of the confidence interval

omegastats
## # A tibble: 2 x 9
##   gender   mean    SD SampleSize t_crit    SE MarginError Lowend Highend
##   <chr>   <dbl> <dbl>      <int>  <dbl> <dbl>       <dbl>  <dbl>   <dbl>
## 1 female 64543. 7567.         26   2.06 1484.       3056. 61486.  67599.
## 2 male   73239. 7463.         24   2.07 1523.       3151. 70088.  76390.

What can you conclude from your analysis? A couple of sentences would be enough

It can be stated that there is a significant difference, since the confidence intervals do not overlap: the high end of the female salary interval is 67,599, whereas the low end of the male interval is 70,088.

You can also run a hypothesis test, assuming as a null hypothesis that the mean difference in salaries is zero, or that, on average, men and women make the same amount of money. You should run your hypothesis test using t.test() and with the simulation method from the infer package.

# hypothesis testing using t.test() 

t.test(salary ~ gender, data = omega)
## 
##  Welch Two Sample t-test
## 
## data:  salary by gender
## t = -4, df = 48, p-value = 2e-04
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -12973  -4420
## sample estimates:
## mean in group female   mean in group male 
##                64543                73239
# hypothesis testing using infer package

initialize_null <- omega %>%
  specify(salary ~ gender) %>%
  calculate(stat = "diff in means", order = c("female", "male")) 

#Simulate the hypothesis test

salaries_null <- omega %>%
  specify(salary ~ gender) %>%
  hypothesize(null = "independence") %>%
  generate(reps = 10000, type = "permute") %>%
  calculate(stat = "diff in means", order = c("female", "male"))

#Plotting the distribution and obtaining p value

salaries_null %>% visualize() +
  shade_p_value(obs_stat = initialize_null, direction = "two-sided", color = "black") +
  labs(x = "Difference between mean salaries",
       y = "# of repetitions",
       title = "Any significant difference between men's and women's salaries?",
       subtitle = "Null hypothesis distribution and p-value") +
  theme_economist()

#Getting the p value

salaries_null %>% get_p_value(obs_stat = initialize_null, direction = "two_sided")
## # A tibble: 1 x 1
##   p_value
##     <dbl>
## 1       0

What can you conclude from your analysis? A couple of sentences would be enough

It can be concluded that we can reject the null hypothesis at the 5% significance level. The reported p-value of 0 means that none of the 10,000 permutations produced a difference as extreme as the observed one (i.e., p < 0.0001), far below 0.05. We can therefore affirm that, in statistical terms, there is a significant difference between male and female salaries. Is the reason discrimination? We can't say yet; further analysis must be performed in order to find out. We will now examine the relationship between experience and gender in order to go deeper in our analysis.

3.3 Relationship Experience - Gender?

At the board meeting, someone raised the issue that there was indeed a substantial difference between male and female salaries, but that this was attributable to other reasons such as differences in experience. A questionnaire sent out to the 50 executives in the sample reveals that the average experience of the men is approximately 21 years, whereas the women only have about 7 years of experience on average (see table below).

# Summary Statistics of experience by gender
favstats(experience ~ gender, data = omega)
##   gender min    Q1 median   Q3 max  mean    sd  n missing
## 1 female   0  0.25    3.0 14.0  29  7.38  8.51 26       0
## 2   male   1 15.75   19.5 31.2  44 21.12 10.92 24       0
omegastats2 <- omega %>%
  select(experience, gender) %>%
  group_by(gender) %>%
  summarise(
    mean = mean(experience),              # mean
    SD = sd(experience),                  # standard deviation
    SampleSize = n(),                     # sample size
    t_crit = qt(0.975, SampleSize - 1),   # t-critical value
    SE = SD / sqrt(SampleSize),           # standard error
    MarginError = t_crit * SE,            # margin of error
    LowEnd = mean - MarginError,          # low endpoint of the confidence interval
    HighEnd = mean + MarginError)         # high endpoint of the confidence interval

omegastats2
## # A tibble: 2 x 9
##   gender  mean    SD SampleSize t_crit    SE MarginError LowEnd HighEnd
##   <chr>  <dbl> <dbl>      <int>  <dbl> <dbl>       <dbl>  <dbl>   <dbl>
## 1 female  7.38  8.51         26   2.06  1.67        3.44   3.95    10.8
## 2 male   21.1  10.9          24   2.07  2.23        4.61  16.5     25.7

Based on this evidence, can you conclude that there is a significant difference between the experience of the male and female executives? Perform similar analyses as in the previous section. Does your conclusion validate or endanger your conclusion about the difference in male and female salaries?

Because the confidence intervals do not overlap (a high end of 10.8 years for females versus a low end of 16.5 years for males), we can state that there is a significant difference in experience. This might also explain the difference in salaries that we saw in our previous analysis, but we have to perform additional analysis in order to reach a proper conclusion.

We now perform the same analysis with hypothesis testing:

# Hypothesis testing using t.test() 
t.test(experience ~ gender, data = omega)
## 
##  Welch Two Sample t-test
## 
## data:  experience by gender
## t = -5, df = 43, p-value = 1e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -19.35  -8.13
## sample estimates:
## mean in group female   mean in group male 
##                 7.38                21.12
# Hypothesis testing using infer package

initialize_null <- omega %>%
  specify(experience ~ gender) %>%
  calculate(stat = "diff in means", order = c("female", "male")) 

#Simulate hypothesis testing

experience_null <- omega %>%
  specify(experience ~ gender) %>%
  hypothesize(null = "independence") %>%
  generate(reps = 1000, type = "permute") %>%
  calculate(stat = "diff in means", order = c("female", "male"))  # same order as the observed statistic

#Plotting the distribution and getting p value

experience_null %>% visualize() +
  shade_p_value(obs_stat = initialize_null, direction = "two-sided", color = "black") +
  labs(x = "Difference between mean experience by gender",
       y = "# of repetitions",
       title = "Any significant difference between men's and women's experience?",
       subtitle = "Null hypothesis distribution and p-value") +
  theme_economist()

#Obtaining the p value

experience_null %>% get_p_value(obs_stat = initialize_null, direction = "two_sided")
## # A tibble: 1 x 1
##   p_value
##     <dbl>
## 1       0

We get the same conclusion as in the analysis above; further analysis is required before reaching a final conclusion.

3.4 Relationship Salary - Experience ?

Someone at the meeting argues that clearly, a more thorough analysis of the relationship between salary and experience is required before any conclusion can be drawn about whether there is any gender-based salary discrimination in the company.

Analyse the relationship between salary and experience. Draw a scatterplot to visually inspect the data

omega %>% 
  ggplot(aes(x = experience, y = salary)) + 
  geom_point() + 
  geom_line() + 
  labs(title = "Experience vs salary relationship",
       x = "Years of Experience",
       y = "Salary $") +
  theme_economist()

3.5 Check correlations between the data

You can use GGally::ggpairs() to create a scatterplot and correlation matrix. Essentially, we change the order in which our variables will appear and put the dependent variable (Y), salary, last in our list. We then pipe the dataframe to ggpairs() with aes arguments to colour by gender and make the plots somewhat transparent (alpha = 0.3).

omega %>% 
  select(gender, experience, salary) %>% #order variables they will appear in ggpairs()
  ggpairs(aes(colour=gender, alpha = 0.3))+
  theme_bw()

Look at the salary vs experience scatterplot. What can you infer from this plot? Explain in a couple of sentences

By observing the scatterplot, it can be stated that there is a positive relationship between salary and experience. Furthermore, most male employees have at least 5 years of experience, while most female employees have less than 5. For female employees with between 5 and 30 years of experience, the same positive relationship between the two variables can be observed. This suggests that the difference in salaries between male and female employees may be largely explained by experience rather than discrimination, although a regression controlling for experience would be needed to say so with confidence.

4 Challenge 1: Yield Curve inversion

Every so often, we hear warnings from commentators about the "inverted yield curve" and its predictive power with respect to recessions. An explainer of what an inverted yield curve is can be found here. If you'd rather listen to something, here is a great podcast from NPR on yield curve indicators.

In addition, many articles and commentators argue that yield curve inversion is a harbinger of recession. One can always doubt whether inversions truly are a harbinger of recessions; see the attached parable on yield curve inversions.

In our case we will look at US data and use the FRED database to download historical yield curve rates, and plot the yield curves since 1999 to see when the yield curves flatten. If you want to know more, a very nice article that explains the yield curve and its inversion can be found here. At the end of this challenge you should produce this chart.

First, we will use the tidyquant package to download monthly rates for different durations.

# Get a list of FRED codes for US rates and US yield curve; choose monthly frequency
# to see, eg., the 3-month T-bill https://fred.stlouisfed.org/series/TB3MS
tickers <- c('TB3MS', # 3-month Treasury bill (or T-bill)
             'TB6MS', # 6-month
             'GS1',   # 1-year
             'GS2',   # 2-year, etc....
             'GS3',
             'GS5',
             'GS7',
             'GS10',
             'GS20',
             'GS30')  #.... all the way to the 30-year rate

# Turn  FRED codes to human readable variables
myvars <- c('3-Month Treasury Bill',
            '6-Month Treasury Bill',
            '1-Year Treasury Rate',
            '2-Year Treasury Rate',
            '3-Year Treasury Rate',
            '5-Year Treasury Rate',
            '7-Year Treasury Rate',
            '10-Year Treasury Rate',
            '20-Year Treasury Rate',
            '30-Year Treasury Rate')

maturity <- c('3m', '6m', '1y', '2y','3y','5y','7y','10y','20y','30y')

# by default R will sort these maturities alphabetically; but since we want
# to keep them in that exact order, we recast maturity as a factor 
# or categorical variable, with the levels defined as we want
maturity <- factor(maturity, levels = maturity)

# Create a lookup dataset
mylookup<-data.frame(symbol=tickers,var=myvars, maturity=maturity)
# Take a look:
mylookup %>% 
  knitr::kable()
symbol var maturity
TB3MS 3-Month Treasury Bill 3m
TB6MS 6-Month Treasury Bill 6m
GS1 1-Year Treasury Rate 1y
GS2 2-Year Treasury Rate 2y
GS3 3-Year Treasury Rate 3y
GS5 5-Year Treasury Rate 5y
GS7 7-Year Treasury Rate 7y
GS10 10-Year Treasury Rate 10y
GS20 20-Year Treasury Rate 20y
GS30 30-Year Treasury Rate 30y
df <- tickers %>% tidyquant::tq_get(get="economic.data", 
                   from="1960-01-01")   # start from January 1960

glimpse(df)
## Rows: 6,774
## Columns: 3
## $ symbol <chr> "TB3MS", "TB3MS", "TB3MS", "TB3MS", "TB3MS", "TB3MS", "TB3MS",…
## $ date   <date> 1960-01-01, 1960-02-01, 1960-03-01, 1960-04-01, 1960-05-01, 1…
## $ price  <dbl> 4.35, 3.96, 3.31, 3.23, 3.29, 2.46, 2.30, 2.30, 2.48, 2.30, 2.…

Our dataframe df has three columns (variables):

  • symbol: the FRED database ticker symbol
  • date: already a date object
  • price: the actual yield on that date

The first thing would be to join this dataframe df with the dataframe mylookup so we have a more readable version of maturities, durations, etc.

yield_curve <-left_join(df,mylookup,by="symbol") 
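
For intuition, left_join keeps every row of df and attaches the matching mylookup columns by symbol. Base R's merge(..., all.x = TRUE) behaves the same way, sketched here on toy data (the symbols and prices are illustrative):

```r
# Sketch of what left_join(df, mylookup, by = "symbol") does, using
# base R's merge on toy data. all.x = TRUE keeps every row of the
# left table; unmatched rows get NA in the lookup columns.
rates  <- data.frame(symbol = c("TB3MS", "GS10", "XXX"),
                     price  = c(4.35, 6.10, 1.00))
lookup <- data.frame(symbol   = c("TB3MS", "GS10"),
                     maturity = c("3m", "10y"))

merge(rates, lookup, by = "symbol", all.x = TRUE)  # "XXX" gets NA maturity
```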

4.1 Plotting the yield curve

This may seem long but it should be easy to produce the following three plots

4.1.1 Yields on US rates by duration since 1960

knitr::include_graphics(here::here("images", "yield_curve1.png"), error = FALSE)

yield_curve %>%  
  ggplot(aes(x = date, y = price, color = var)) +
  geom_line() +
  facet_wrap(~ factor(var, levels = myvars), ncol = 2) +
  guides(colour=FALSE) +
  ylab("%") +
  xlab("") +
  labs(title="Yields on U.S. Treasury rates since 1960",
       caption="Source: St. Louis Federal Reserve Economics Database (FRED)")

4.1.2 Monthly yields on US rates by duration since 1999 on a year-by-year basis

knitr::include_graphics(here::here("images", "yield_curve2.png"), error = FALSE)

yield_curve %>% 
  filter(year(date) >= 1999) %>% 
  ggplot(aes(x = maturity, y = price, color = year(date), group=month(date))) +
  geom_line() +
  facet_wrap(~year(date), ncol=4) +
  guides(colour=FALSE) +
  ylab("Yield (%)") +
  xlab("Maturity") +
  labs(title="US Yield Curve",
       caption="Source: St. Louis Federal Reserve Economics Database (FRED)")

4.1.3 3-month and 10-year yields since 1999

knitr::include_graphics(here::here("images", "yield_curve3.png"), error = FALSE)

yield_curve %>% 
  filter(year(date) >= 1999,
         maturity %in% c("3m", "10y")) %>% 
  ggplot(aes(x = date, y = price, color = var)) +
  geom_line() +
  ylab("%") +
  guides(color=guide_legend(title="")) +
  labs(title="Yields on 3-month and 10-year US Treasury rates since 1999",
       caption="Source: St. Louis Federal Reserve Economics Database (FRED)")

According to Wikipedia's list of recessions in the United States, since 1999 there have been two recessions in the US: between Mar 2001–Nov 2001 and between Dec 2007–June 2009. Does the yield curve seem to flatten before these recessions? Can a yield curve flattening really mean a recession is coming in the US? Since 1999, when did short-term (3-month) debt yield more than longer-term (10-year) debt?

The yield curve does flatten before each major recession: the graphs show the spread narrowing before each recession hits the US. We also notice that short-term debt yielded more than long-term debt shortly before each of these recessions (in 2000 and again in 2006–2007), likely reflecting investors' reluctance to hold longer maturities amid uncertainty.

Besides calculating the spread (10year - 3months), there are a few things we need to do to produce our final plot

  1. Setup data for US recessions
  2. Superimpose recessions as the grey areas in our plot
  3. Plot the spread between 10 years and 3 months as a blue/red ribbon, based on whether the spread is positive (blue) or negative (red)
  • For the first, the code below creates a dataframe with all US recessions since 1946
# get US recession dates after 1946 from Wikipedia 
# https://en.wikipedia.org/wiki/List_of_recessions_in_the_United_States

recessions <- tibble(
  from = c("1948-11-01", "1953-07-01", "1957-08-01", "1960-04-01", "1969-12-01", "1973-11-01", "1980-01-01","1981-07-01", "1990-07-01", "2001-03-01", "2007-12-01"),  
  to = c("1949-10-01", "1954-05-01", "1958-04-01", "1961-02-01", "1970-11-01", "1975-03-01", "1980-07-01", "1982-11-01", "1991-03-01", "2001-11-01", "2009-06-01") 
  )  %>% 
  mutate(From = ymd(from), 
         To=ymd(to),
         duration_days = To-From)

recessions
## # A tibble: 11 x 5
##    from       to         From       To         duration_days
##    <chr>      <chr>      <date>     <date>     <drtn>       
##  1 1948-11-01 1949-10-01 1948-11-01 1949-10-01 334 days     
##  2 1953-07-01 1954-05-01 1953-07-01 1954-05-01 304 days     
##  3 1957-08-01 1958-04-01 1957-08-01 1958-04-01 243 days     
##  4 1960-04-01 1961-02-01 1960-04-01 1961-02-01 306 days     
##  5 1969-12-01 1970-11-01 1969-12-01 1970-11-01 335 days     
##  6 1973-11-01 1975-03-01 1973-11-01 1975-03-01 485 days     
##  7 1980-01-01 1980-07-01 1980-01-01 1980-07-01 182 days     
##  8 1981-07-01 1982-11-01 1981-07-01 1982-11-01 488 days     
##  9 1990-07-01 1991-03-01 1990-07-01 1991-03-01 243 days     
## 10 2001-03-01 2001-11-01 2001-03-01 2001-11-01 245 days     
## 11 2007-12-01 2009-06-01 2007-12-01 2009-06-01 548 days
  • To add the grey shaded areas corresponding to recessions, we use geom_rect()
  • To colour the ribbons blue/red, we check whether the spread is positive or negative and then use geom_ribbon(). You should be familiar with this from last week’s homework on the excess weekly/monthly rentals of Santander Bikes in London.
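Splitting a series into its positive and negative parts for the two ribbons can be done with pmax()/pmin(), which take element-wise maxima/minima against zero:

```r
# Split a vector into its negative and positive parts (one per ribbon)
x <- c(-2, 1, 3, -1)
pmin(x, 0)  # negative part: -2  0  0 -1
pmax(x, 0)  # positive part:  0  1  3  0
```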
diff_df <- yield_curve %>%
  select(date, price, maturity) %>% 
   pivot_wider(
     names_from = maturity,
     values_from = price
     ) %>% 
  mutate(diff = `10y` - `3m`)

recessions_df <- data.frame(xmin = as.Date(recessions$from),
                              xmax = as.Date(recessions$to),
                              ymin = -Inf,
                              ymax = Inf) %>% 
  filter(year(xmin) > 1960, 
         year(xmax) > 1960)

diff_df %>% 
  ggplot(aes(x = date, y = diff)) +
  geom_line() +
  geom_rug(aes(colour = ifelse(diff <= 0, "<=0", ">0")), sides = "b", show.legend = FALSE) +
  geom_hline(yintercept = 0) +
  # ribbons: red below zero (inversion), blue above zero
  geom_ribbon(aes(ymin = pmin(diff, 0), ymax = 0),
              fill = "red", colour = "red", alpha = 0.2) +
  geom_ribbon(aes(ymin = 0, ymax = pmax(diff, 0)),
              fill = "blue", colour = "blue", alpha = 0.2) +
  # grey shaded areas for recessions
  geom_rect(data = recessions_df, inherit.aes = FALSE,
            aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax),
            colour = "grey", alpha = 0.2) +
  labs(title = "Yield Curve inversion: 10 year minus 3 month U.S. Treasury rates",
       subtitle = "Difference in % points, monthly averages\nShaded areas correspond to recessions.",
       x = NULL,
       y = "Difference in 10 year - 3 month yield (%)") +
  theme(legend.position = "none",
        plot.caption = element_text(hjust = 0),
        plot.subtitle = element_text(face = "italic"),
        plot.title = element_text(size = 12, face = "bold"))

5 Challenge 2: GDP components over time and among countries

At the risk of oversimplifying things, the main components of gross domestic product (GDP) are personal consumption (C), business investment (I), government spending (G) and net exports (exports minus imports). You can read more about GDP and the different approaches to calculating it at the Wikipedia GDP page.
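As a toy illustration of the expenditure approach (all figures made up, in US$ billions):

```r
# Hypothetical components (illustrative only)
C <- 500   # personal consumption
I <- 120   # business investment
G <- 100   # government spending
X <- 80    # exports
M <- 90    # imports
GDP <- C + I + G + (X - M)
GDP  # 710
```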

The GDP data we will look at is from the United Nations’ National Accounts Main Aggregates Database, which contains estimates of total GDP and its components for all countries from 1970 to today. We will look at how GDP and its components have changed over time, and compare different countries and how much each component contributes to that country’s GDP. The file we will work with is GDP and its breakdown at constant 2010 prices in US Dollars and it has already been saved in the Data directory. Have a look at the Excel file to see how it is structured and organised.

UN_GDP_data  <-  read_excel(here::here("data", "Download-GDPconstant-USD-countries.xls"), # Excel filename
                sheet="Download-GDPconstant-USD-countr", # Sheet name
                skip=2) # Number of rows to skip

The first thing you need to do is to tidy the data: it is in wide format and you must reshape it into long, tidy format. Please express all figures in billions (divide values by 1e9, or \(10^9\)), and rename the indicators to something shorter.

Make sure you remove eval=FALSE from the next chunk of R code – I have it there so I could knit the document.

tidy_GDP_data  <-  UN_GDP_data %>% 
  # columns 4 to 51 hold the yearly values (1970 onwards)
  pivot_longer(cols = 4:51,
               names_to = "date",
               values_to = "value") %>% 
  mutate(value = value / 1e9) %>%   # express figures in US$ billions
  select(ID = CountryID,
         country = Country,
         date,
         name = IndicatorName,
         value) %>% 
  mutate( 
          name = case_when(
          name == "Final consumption expenditure" ~ "Final Exp",
          name == "Household consumption expenditure (including Non-profit institutions serving households)" ~ "Household Exp",
          name == "General government final consumption expenditure" ~ "Govt Exp",
          name == "Gross capital formation" ~ "Gross capital formation",
          name == "Gross fixed capital formation (including Acquisitions less disposals of valuables)" ~ "GFCF",
          name == "Exports of goods and services" ~ "Exports",
          name == "Imports of goods and services" ~ "Imports",
          name == "Agriculture, hunting, forestry, fishing (ISIC A-B)" ~ "ISIC A-B",
          name == "Mining, Manufacturing, Utilities (ISIC C-E)" ~ "ISIC C-E",
          name == "Manufacturing (ISIC D)" ~ "ISIC D",
          name == "Construction (ISIC F)" ~ "ISIC F",
          name == "Wholesale, retail trade, restaurants and hotels (ISIC G-H)" ~ "Retail/Tourism",
          name == "Transport, storage and communication (ISIC I)" ~ "ISIC I",
          name == "Other Activities (ISIC J-P)" ~ "ISIC J-P",
          name == "Total Value Added" ~ "Total Value",
          name == "Gross Domestic Product (GDP)" ~ "GDP"
  ))

glimpse(tidy_GDP_data)
## Rows: 176,880
## Columns: 5
## $ ID      <dbl> 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4…
## $ country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "…
## $ date    <chr> "1970", "1971", "1972", "1973", "1974", "1975", "1976", "1977…
## $ name    <chr> "Final Exp", "Final Exp", "Final Exp", "Final Exp", "Final Ex…
## $ value   <dbl> 5.56, 5.33, 5.20, 5.75, 6.15, 6.32, 6.37, 6.90, 7.09, 6.92, 6…
# Let us compare GDP components for these 3 countries
country_list <- c("United States","India", "Germany")

First, can you produce this plot?

knitr::include_graphics(here::here("images", "gdp1.png"), error = FALSE)

indicator_list <- c("Gross capital formation", "Exports", "Govt Exp", "Household Exp", "Imports")

tidy_GDP_data %>% 
  filter(country %in% country_list,
         name %in% indicator_list) %>% 
  ggplot(aes(x=as.numeric(date), y = value, color=name)) +
  facet_wrap(~country) +
  geom_line() +
  labs(title="GDP components over time",
       subtitle = "In constant 2010 USD") +
  ylab("Billion US$") +
  xlab("") +
  scale_color_discrete(name="Components of GDP")+
  scale_x_continuous(n.breaks=4)

Secondly, recall that GDP is the sum of Household Expenditure (Consumption C), Gross Capital Formation (business investment I), Government Expenditure (G) and Net Exports (exports - imports). Even though there is an indicator Gross Domestic Product (GDP) in your dataframe, I would like you to calculate it given its components discussed above.

calc_GDP <- tidy_GDP_data %>% 
  filter(
    name %in% c("Gross capital formation", "Govt Exp", "Household Exp", "Exports", "Imports")
  ) %>% 
  pivot_wider(
    names_from = name,
    values_from = value
  ) %>% 
  mutate(
    calc_GDP = `Household Exp` + `Gross capital formation` + `Govt Exp` + Exports - Imports
  )

# rows of calc_GDP and of the GDP indicator are in the same (country, year) order
gdp_indicator <- tidy_GDP_data %>% filter(name == "GDP")
perc_diff_gdp <- calc_GDP$calc_GDP / gdp_indicator$value - 1

plot(perc_diff_gdp)

What is this last chart telling you? Can you explain in a couple of paragraphs the different dynamic among these three countries?

Government expenditure: In the US, government expenditure spiked during recessions but has been on a downward trend overall since the 70’s. For Germany and India, government expenditure remained rather stable, although India shows the lowest percentage and Germany the highest. This reflects the welfare states of the two countries, with Germany placing high importance on universal healthcare and free education.

Gross capital formation / Net exports: A common trend across all 3 countries is that when gross capital formation is high, net exports tend to be down, since heavy investment often relies on goods bought from abroad. India has shown the largest increase in gross capital formation since the 2000s, consistent with the country’s expansion over the last 20 years.

Household expenditure: This remains relatively constant across all 3 countries. Key trends to note are India’s decreasing household expenditure since 1990 and the US’s slightly increasing expenditure, despite the housing crisis of 2007/2008.

If you want to, please change country_list <- c("United States","India", "Germany") to include your own country and compare it with any two other countries you like.

our_countries <- tidy_GDP_data %>% 
   filter(country %in% c("China", "Hungary", "Austria", "Spain", "United States"),
          name %in% c("Govt Exp", "Gross capital formation", "Exports", "Imports", "Household Exp")) %>% 
   pivot_wider(names_from = name,
               values_from = value) %>%
  mutate(`Net Export` = Exports - Imports)

our_countries_date <- our_countries %>% 
  mutate(
    GDP = `Net Export` + `Govt Exp` + `Gross capital formation` + `Household Exp`
  ) %>% 
  group_by(country) %>% 
  arrange(date, .by_group = TRUE) %>%
  # transmute keeps the grouping variable (country) plus the columns listed here
  transmute(
    date = as.numeric(date),
    `Net Export` = `Net Export` / GDP,
    `Govt Exp` = `Govt Exp` / GDP,
    `Gross capital formation` = `Gross capital formation` / GDP,
    `Household Exp` = `Household Exp` / GDP) %>% 
  ungroup() %>% 
  pivot_longer(3:6)

ggplot(our_countries_date, aes(x = date, y = value, group = name, color = name)) +
  geom_line(size = 1) +
  labs(title = "GDP and its breakdown at constant 2010 prices in US Dollars", 
       y = "proportion", 
       caption = "Source: United Nations, https://unstats.un.org/unsd/snaama/Downloads") +
  facet_wrap(~country) +
  scale_x_continuous(breaks = scales::pretty_breaks(3)) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_discrete(labels = c("Gross capital formation", 
                                 "Government expenditure",
                                 "Household expenditure",
                                 "Net Exports")) +
  theme(panel.grid = element_line(colour = "#f0f0f0"),
        strip.background = element_rect(colour = "black", size = 0.5, fill = "grey80"),
        panel.background = element_rect(colour = "black", size = 0.5, fill = NA),
        legend.key = element_rect(colour = "transparent", fill = "transparent"),
        legend.title = element_blank(),
        axis.title.x = element_blank(),
        plot.caption = element_text(hjust = 1, size = 8))

6 Deliverables

There is a lot of explanatory text, comments, etc. You do not need these, so delete them and produce a stand-alone document that you could share with someone. Knit the edited and completed R Markdown file as an HTML document (use the “Knit” button at the top of the script editor window) and upload it to Canvas.

7 Details

  • Who did you collaborate with: Daniel, Tao, Balint, Eudald, Mayssa.
  • Approximately how much time did you spend on this problem set: 6 hours
  • What, if anything, gave you the most trouble: Challenges 1 & 2

Please seek out help when you need it, and remember the 15-minute rule. You know enough R (and have enough examples of code from class, your previous homeworks, and your readings) to be able to do this. If you get stuck, ask for help from others, post a question on Slack– and remember that I am here to help too!

As a true test to yourself, do you understand the code you submitted and are you able to explain it to someone else?

Yes.

8 Rubric

Check minus (1/5): Displays minimal effort. Doesn’t complete all components. Code is poorly written and not documented. Uses the same type of plot for each graph, or doesn’t use plots appropriate for the variables being analyzed.

Check (3/5): Solid effort. Hits all the elements. No clear mistakes. Easy to follow (both the code and the output).

Check plus (5/5): Finished all components of the assignment correctly and addressed both challenges. Code is well-documented (both self-documented and with additional comments as necessary). Used tidyverse, instead of base R. Graphs and tables are properly labelled. Analysis is clear and easy to follow, either because graphs are labeled clearly or you’ve written additional text to describe how you interpret the output.