Portfolios with Select Sector ETFs in R

This post is motivated by the notion that, after fees, the market will outperform most individual asset managers in the long run. I propose a more efficient alternative to holding an S&P 500 index fund, one that carries less risk and only mildly increases complexity.

This post is a work in progress. All code for this exercise will be available soon on my GitHub and linked here.

Warren Buffett states:

My advice … couldn’t be more simple: Put 10% of the cash in short-term government bonds and 90% in a very low-cost S&P 500 index fund. (I suggest Vanguard’s.) I believe the trust’s long-term results from this policy will be superior to those attained by most investors — whether pension funds, institutions or individuals — who employ high-fee managers.

As computing power becomes cheaper and financial data more accessible, outperforming the market in the long run is becoming increasingly difficult. In The Intelligent Investor, Ben Graham suggests that an investor unwilling to dedicate significant time and energy to security analysis is better off investing in an index fund. However, holding the market alone can leave much to be desired.
Conveniently, the S&P 500 is broken out into sectors:
  • Consumer Discretionary (XLY)
  • Utilities (XLU)
  • Consumer Staples (XLP)
  • Energy (XLE)
  • Financials (XLF)
  • Healthcare (XLV)
  • Industrials (XLI)
  • Materials (XLB)
  • Real Estate (XLRE) – Not included in this analysis
  • Technology (XLK)

For more information on the assets, see the Select Sector SPDR ETF site, which has detailed information on each fund along with a number of widgets.

These sectors provide exposure to subsets of the greater index. At any given moment, one sector may be outperforming the others. I'd like to capitalize on that while remaining positioned to capture the general trend of the market, without taking on unnecessary risk.

I will begin the exercise by sharing an analysis of the returns of all sectors to be included in the portfolio. I will continue with naive portfolio construction, portfolio construction with a DCC GARCH model, and finally portfolio construction with a recurrent neural network.

Sector Analysis

For this section, I utilize the packages below. Notable packages are alphavantager, which accesses the free Alpha Vantage stock API (get your free API key at the link); tidyquant, which brings financial analysis into the tidyverse (see here); and grid, which facilitates writing tables to the R graphics device.

require(alphavantager)
require(tidyquant)
require(PerformanceAnalytics)
require(tseries)
require(stats)
require(moments)
require(car)
require(lattice)
require(latticeExtra)
require(graphics)
require(corrplot)
require(gridExtra)
require(grid)
require(reshape2)
require(ggplot2)

From the Alpha Vantage API, I grab monthly data for the 9 sector ETFs used in this analysis, as well as “the Market”, ticker SPY. I feed a list of tickers into my function get_alphavan_stocks and get a named list of dataframes back. From this list, I grab the close price for each month and the monthly returns, storing them in list_sectors_close and list_sectors_rets respectively. I take a look at the close prices for all sectors over time using the plotting code further below.
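For reference, here is a minimal sketch of what the data-fetching step might look like, assuming the alphavantager av_get interface; the exact implementation may differ.

#Hypothetical sketch of get_alphavan_stocks using alphavantager
av_api_key("YOUR_API_KEY") #free key from the Alpha Vantage site

get_alphavan_stocks <- function(tickers) {
  stocks <- lapply(tickers, function(tk) {
    Sys.sleep(15) #the free tier is rate-limited to a few calls per minute
    av_get(symbol = tk, av_fun = "TIME_SERIES_MONTHLY_ADJUSTED")
  })
  names(stocks) <- tickers
  stocks
}

list_sectors <- get_alphavan_stocks(c("XLY", "XLF", "XLE", "XLU", "XLK",
                                      "XLP", "XLB", "XLV", "XLI", "SPY"))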

#Plot all close prices
sector_close_prices <- do.call('cbind', list_sectors_close)
sector_close_prices <- cbind(list_sectors[[1]]$timestamp, sector_close_prices)
colnames(sector_close_prices) <- c("Date", "XLY", "XLF", "XLE", "XLU", "XLK",
                                   "XLP", "XLB", "XLV", "XLI")
melted_sector_close <- melt(sector_close_prices, id = "Date")
ggplot(data = melted_sector_close,
       aes(x = Date, y = value, group = variable, color = variable)) +
  geom_line() +
  scale_x_date(date_breaks = "5 years") +
  ylab("Close Price") +
  ggtitle("Close Prices of Sectors\n Monthly Feb. 2000 to March 2018")

[Figure: Close prices of sectors, monthly, Feb. 2000 to March 2018]

I then take the returns of all sector ETFs (and the market) and feed them into a function that creates a PDF for examining the monthly returns of each ticker. I'll walk through the outputs of this function using XLY as an example.

First, I create a simple time series plot of the monthly returns. We can see high volatility periods around 2001 and 2008, as expected.

Screenshot 2018-04-03 23.53.29

Second, we can use hist to create a density histogram of the monthly returns bucketed into 5% intervals, with the kernel density estimate of the monthly returns plotted on top. These returns appear to be skewed left with a mean greater than zero. This makes sense, as the value of the market increases in the long run but is prone to dramatic corrections.
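A minimal sketch of this plot, assuming xly_rets holds XLY's monthly returns as a plain numeric vector extracted from list_sectors_rets:

#Density histogram in 5% buckets with a kernel density overlay
brks <- seq(floor(min(xly_rets) * 20) / 20,
            ceiling(max(xly_rets) * 20) / 20, by = 0.05)
hist(xly_rets, breaks = brks, freq = FALSE,
     main = "XLY Monthly Returns", xlab = "Monthly Return")
lines(density(xly_rets), col = "blue", lwd = 2)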

[Figure: Density histogram of XLY monthly returns with kernel density overlay]

I use the grid and gridExtra packages to output some information about the moments and certain quantiles. Shown are the mean, variance, standard deviation, skewness, and kurtosis. The negative skewness and excess kurtosis indicate that this data might not fit a normal distribution. The Jarque–Bera test, a goodness-of-fit test for normality, confirms it.

[Tables: Moments, quantiles, and Jarque–Bera test output for XLY monthly returns]
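These statistics and the test come down to a few lines, sketched below with the same assumed xly_rets vector (skewness and kurtosis from moments, the test from tseries):

#Moments, quantiles, and normality test for XLY's monthly returns
c(mean = mean(xly_rets), variance = var(xly_rets), sd = sd(xly_rets),
  skewness = skewness(xly_rets), kurtosis = kurtosis(xly_rets))
quantile(xly_rets, probs = c(0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99))
jarque.bera.test(xly_rets) #a small p-value rejects normality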

Lastly, we can use the car package to create quantile-quantile (QQ) plots. These give another view of the distribution of the sector's returns by plotting the quantiles of one distribution against another. The points form a steeper line than the red 45° line, indicating that the stock returns have fatter tails than the normal distribution. The plot also includes convenient envelopes reflecting a 95% confidence interval under normality: the fit is close in the middle, but the tails fall out of bounds. The PDF function additionally outputs a QQ plot comparing the data with a Student-t distribution with 3 degrees of freedom, and another with 9. This sector's returns seem to fall between the two.
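A sketch of these plots with car::qqPlot; extra arguments are passed to the quantile function, so df sets the t distribution's degrees of freedom:

#QQ plots against normal and Student-t reference distributions
qqPlot(xly_rets, distribution = "norm", main = "XLY vs. Normal")
qqPlot(xly_rets, distribution = "t", df = 3, main = "XLY vs. t, 3 df")
qqPlot(xly_rets, distribution = "t", df = 9, main = "XLY vs. t, 9 df")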

[Figure: QQ plots of XLY monthly returns vs. normal and Student-t distributions]

Having the above information about the distribution of returns for a given sector helps us understand sector returns individually. However, it's important to also examine sector returns in context with other sectors. An easy way to quickly examine the risk-return tradeoff across sectors is a plot of means vs. standard deviations. I've included SPY as well. Note that these means and standard deviations are calculated from the entire series of monthly returns, Feb. 2000 to April 5th, 2018.
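A sketch of this plot, assuming sector_returns is a dataframe with a Date column followed by one return column per ticker (the same object used in the DCC code later):

#Risk-return scatter: mean vs. standard deviation of monthly returns
ret_mat <- as.matrix(sector_returns[, -1])
mu_vec  <- colMeans(ret_mat)
sd_vec  <- apply(ret_mat, 2, sd)
plot(sd_vec, mu_vec, xlab = "Std. Dev. of Monthly Returns",
     ylab = "Mean Monthly Return", main = "Risk vs. Return by Sector")
text(sd_vec, mu_vec, labels = colnames(ret_mat), pos = 3, cex = 0.8)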

[Figure: Mean vs. standard deviation of monthly returns by sector and SPY, Feb. 2000 to April 2018]

Unsurprisingly, consumer staples (XLP) has the lowest standard deviation of the bunch, followed closely by healthcare (XLV), the market (SPY), and utilities (XLU). The leader in returns is consumer discretionary (XLY). Financials (XLF) and technology (XLK) are interesting: these points don't fit the imaginary up-and-to-the-right line that would indicate a proper risk-return trade-off. These ETFs appear riskier with a lower return. Their showing here is likely driven by the 2001 dotcom crash and the 2008 financial crisis being included in the dataset; I'm sure the results would look different at a different snapshot in time.

In addition, we can look at correlations between sector ETFs.
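One way to render the matrix is corrplot, sketched below with the assumed ret_mat from earlier:

#Correlation matrix of sector monthly returns
corrplot(cor(ret_mat), method = "color", addCoef.col = "black",
         tl.col = "black", number.cex = 0.7)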

[Figure: Correlation matrix of sector ETF monthly returns]

Technology and Utilities seem to be the least correlated pair. In fact, a quick glance shows that Utilities appears to be the least correlated with the rest of the group.

Naive Portfolio Construction

After analyzing the sectors, let's construct optimal portfolios with them. We will build global minimum variance (GMV) and targeted-return (TRET) portfolios. These portfolios are considered naive because they depend on a covariance matrix of past returns even though we have no assurance that historical covariance will reflect the future.

The GMV portfolio provides asset weights that, based on a covariance matrix \bold{\Sigma} derived from a period of returns, yield the minimum possible variance. Given a portfolio with n assets and a vector of weights \bold{m} = (m_1,...,m_n)', we solve the constrained minimization problem below.

\large{\min\limits_{\bold{m}} \sigma^2_{p,m} = \bold{m}'\bold{\Sigma}\bold{m} \hspace{.01\textwidth} \text{s.t.} \hspace{.01\textwidth} \bold{m}'\bold{1} = 1}

Using this to construct a Lagrangian and solving for \bold{m}, we arrive at the equation below.

 \bold{m} = \frac{\bold{\Sigma}^{-1}\bold{1}}{\bold{1}'\bold{\Sigma}^{-1}\bold{1}}

For this experiment, all \bold{\Sigma} are based on 5 years of monthly historical data for the 9 sector ETFs. Again, the result of the equation above, \bold{m}, is a vector of weights corresponding to each asset.
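In R, the closed-form solution is a few lines; the sketch below reuses the assumed ret_mat return matrix from earlier:

#GMV weights: m = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
gmv_weights <- function(Sigma) {
  m <- solve(Sigma, rep(1, ncol(Sigma))) #Sigma^{-1} %*% 1
  m / sum(m)                             #normalize so the weights sum to one
}
Sigma_hist <- cov(tail(ret_mat, 60)) #most recent 5 years of monthly returns
m_gmv <- gmv_weights(Sigma_hist)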

The TRET portfolio attempts to minimize variance while achieving a targeted level of returns. We are solving the same minimization problem as above with an additional constraint.

\large{\min\limits_{\bold{m}} \sigma^2_{p,m} = \bold{m}'\bold{\Sigma}\bold{m} \hspace{.01\textwidth} \text{s.t.} \hspace{.01\textwidth} \bold{m}'\bold{1} = 1 \hspace{.01\textwidth} \text{and} \hspace{.01\textwidth} \bold{m}'\bold{\mu} = x}

In the equation above, \bold{\mu} is a vector containing the mean returns of all assets in the portfolio over the same period as \bold{\Sigma}, and x is the return value being targeted. From this system we can use matrix algebra to derive an equation that outputs the optimal weights:

\bold{z} = \bold{A}^{-1}\bold{b}

Where \bold{z} = [\bold{m}, \lambda_1, \lambda_2]', \bold{b} = [\bold{0}, x, 1]', and

\bold{A} = \left( \begin{matrix} 2\bold{\Sigma} & \bold{\mu} & \bold{1}\\ \bold{\mu}' & 0 & 0\\ \bold{1}' & 0 & 0 \end{matrix} \right)
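A sketch of solving this system in R, under the same assumptions as the GMV snippet:

#TRET weights by solving A z = b for a target monthly return x
tret_weights <- function(Sigma, mu, x) {
  n <- length(mu)
  A <- rbind(cbind(2 * Sigma, mu, rep(1, n)),
             c(mu, 0, 0),
             c(rep(1, n), 0, 0))
  b <- c(rep(0, n), x, 1)
  solve(A, b)[1:n] #the first n entries of z are the weights m
}
m_tret <- tret_weights(Sigma_hist, colMeans(tail(ret_mat, 60)), x = 0.01)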

With these equations, we can construct the efficient frontier. The efficient frontier represents the limits of the portfolios you can construct from a given set of assets: it tells you what return you can achieve for a given amount of risk, or vice versa. Below is the efficient frontier for portfolios containing the nine sector ETFs.

[Figure: Efficient frontier for the nine sector ETFs]

This line was constructed by first calculating \bold{\Sigma} for all assets over the entire time horizon of the data. With \bold{\Sigma}, we find the leftmost point of the frontier by computing the GMV portfolio for the full length of the dataset, which gives the level of return that comes with the smallest possible standard deviation. From there, we calculate TRET portfolios starting at the mean return of the GMV portfolio and slowly stepping the target upwards. Again, this line tells us the best mean monthly return we can achieve at a given level of risk with a portfolio bought in 2000 and held. Importantly, this line was calculated with one fixed set of weights; it may be possible to surpass it by adjusting portfolio weights intermittently through time.
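A sketch of tracing the frontier with the two helper functions above:

#Trace the frontier from the GMV portfolio's return upward
Sigma_full <- cov(ret_mat)
mu_full    <- colMeans(ret_mat)
gmv_ret    <- sum(gmv_weights(Sigma_full) * mu_full)
targets    <- seq(gmv_ret, max(mu_full), length.out = 50)
frontier <- t(sapply(targets, function(x) {
  m <- tret_weights(Sigma_full, mu_full, x)
  c(sd = sqrt(as.numeric(t(m) %*% Sigma_full %*% m)), mean = x)
}))
plot(frontier, type = "l", xlab = "Std. Dev. of Monthly Returns",
     ylab = "Mean Monthly Return", main = "Efficient Frontier")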

On to portfolio construction. Again, we calculate \bold{\Sigma} from 5 years of monthly historical data and construct a new optimal portfolio at the end of each month. We construct two TRET portfolios: one that targets the return of the best-performing sector over the previous period, and one that targets a constant 1% monthly return.
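A sketch of the monthly re-weighting loop for the GMV case, under the same ret_mat assumption:

#Rolling GMV: estimate Sigma from the prior 60 months, hold for one month
est_window <- 60
gmv_rets <- numeric(nrow(ret_mat) - est_window)
for (t in est_window:(nrow(ret_mat) - 1)) {
  Sigma_t <- cov(ret_mat[(t - est_window + 1):t, ])
  m_t <- gmv_weights(Sigma_t)
  gmv_rets[t - est_window + 1] <- sum(m_t * ret_mat[t + 1, ]) #next month's return
}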

[Tables: Summary statistics for the GMV, TRET, and TRET1 portfolios and SPY]

As we can see above, considering returns alone, all portfolios beat the market. However, none of them achieve positive Jensen's alpha. We do see that the GMV and TRET1 portfolios beat the market with a lower variance. Whether these portfolios count as a “success” depends on your goals as a portfolio manager. Below, I plot the cumulative returns of these portfolios and SPY over time.

[Figure: Cumulative returns of the GMV, TRET, and TRET1 portfolios and SPY]

The three portfolios and the market track closely, but the severity of the drop in 2008 is notable: all three portfolios lost less than SPY. On the other hand, they didn't gain much from avoiding the plunge, as SPY's sharp rebound immediately caught up. Plotting the portfolios against the efficient frontier from earlier, we can see that the GMV portfolio gets quite close, while the other portfolios seem to add risk without adding return.

[Figure: Portfolio means and standard deviations plotted against the efficient frontier]

In conclusion, a naive GMV portfolio with monthly re-weighting performs better than SPY. The naive TRET and TRET1 portfolios do as well, but stray farther from the efficient frontier.

DCC GARCH Portfolio Construction

Instead of assuming that next period's \bold{\Sigma} will be the same as this period's, in this section we attempt to forecast next period's \bold{\Sigma} in order to obtain better weights for the coming period. To do this we use a modification of the GARCH model called DCC GARCH (Dynamic Conditional Correlation). GARCH models are a family of models for the variance of a series of data. The general functional form can be represented as

\sigma^{d}_{t} = \alpha_0 + \sum_{i=1}^p \alpha_i \lvert x_{t-i} \rvert^d + \sum_{j=1}^q \beta_j \sigma_{t-j}^d

Where x_t follows a Gaussian distribution with mean 0 and variance \sigma_t^2.
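As a quick univariate illustration, a GARCH(1,1) with d = 2 (the standard squared-returns case) can be fit to XLY's demeaned returns with tseries::garch; this is a sketch for intuition, not part of the DCC pipeline below:

#Univariate GARCH(1,1) on XLY's demeaned monthly returns
garch_fit <- garch(xly_rets - mean(xly_rets), order = c(1, 1))
summary(garch_fit)                   #coefficient estimates and diagnostics
plot(fitted(garch_fit)[, 1], type = "l",
     ylab = "Conditional Std. Dev.") #fitted conditional volatility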

The DCC GARCH model helps us understand how different series of data move relative to each other. After fitting it to asset returns, one can use it to forecast a covariance matrix for the next period. Our hope is that this forecast is more accurate than naively assuming the covariance matrix stays the same from one period to the next.

I plan to add LaTeX equations clarifying how to get from return data to a forecasted \bold{\Sigma}. For now, see the R code I use at each timestep i below. Note that it relies on the ccgarch package's dcc.estimation function.


require(ccgarch) #provides dcc.estimation; not in the package list above

#Estimation window of returns for the 9 sector ETFs
selection <- sector_returns[i:(estimation_period+i-1), 2:10]
selection_means <- colMeans(selection)
selection_mean[i,] <- selection_means
#De-mean the returns
selection_dm <- selection - matrix(selection_means, estimation_period, n_assets, byrow = T)
#Initial guesses for the univariate GARCH constants and the DCC parameters
a_step <- diag(cov(selection))
A_step <- diag(rep(.3, n_assets))
B_step <- diag(rep(0.75, n_assets))
DCC_parameters_step <- c(0.01, 0.98)
DCC_regression_step <- dcc.estimation(inia = a_step, iniA = A_step, iniB = B_step,
                                      ini.dcc = DCC_parameters_step,
                                      dvar = selection_dm, model = "diagonal")
var_step <- DCC_regression_step$h #fitted conditional variances
#Unpack the estimated coefficients
Coeff.step <- DCC_regression_step$out[1,]
a_step <- matrix(Coeff.step[1:n_assets], n_assets, 1)
A_step <- diag(Coeff.step[(n_assets+1):(2*n_assets)])
B_step <- diag(Coeff.step[(2*n_assets+1):(3*n_assets)])
phi_step <- Coeff.step[(3*n_assets+1):(3*n_assets+2)] #the two DCC parameters
#One-step-ahead variance forecast for each asset
forecastH[i,] <- a_step + A_step%*%(as.numeric(selection[estimation_period,]^2)) +
  B_step%*%t(t(var_step[estimation_period,]))
#One-step-ahead correlation forecast via the DCC recursion
DCC_DCC_step <- matrix(DCC_regression_step$DCC[estimation_period,], n_assets, n_assets)
tildeE <- matrix(as.numeric(selection[estimation_period,]/sqrt(var_step[estimation_period,])),
                 n_assets, 1)
barR <- cor(selection) #unconditional correlation matrix
Q_step <- (1-phi_step[1]-phi_step[2])*barR + phi_step[1]*(tildeE%*%t(tildeE)) +
  phi_step[2]*DCC_DCC_step
#R at t+1: rescale Q into a proper correlation matrix
RR <- solve(diag(sqrt(diag(Q_step))))%*%Q_step%*%solve(diag(sqrt(diag(Q_step))))
#Forecasted covariance matrix Sigma for t+1
sigma[[i]] <- diag(sqrt(forecastH[i,]))%*%RR%*%diag(sqrt(forecastH[i,]))

Unfortunately, these portfolios seem to perform only slightly better than their naive counterparts. Below, you can see the GARCH portfolios alongside their naive counterparts. (Note that the colors have changed from the previous plots.)

[Figures: DCC GARCH portfolios compared with their naive counterparts]

We can also examine the model's next-period estimates of asset variance. The diagonals of the forecasted \bold{\Sigma}'s give the forecasted asset variances, which we can compare to realized asset variance over time.

[Figure: Forecasted (red) vs. realized asset variance over time]

I notice two things here. First, the forecasts (red) can be improved. Second, this model tends to do well when shifts are gradual and steady; note how many of the forecasts converge towards the end, a low-volatility period. After looking at these forecasts of sector variance, I wanted to improve their overall quality, on the fair assumption that more accurate forecasts of \bold{\Sigma} can drive better returns and less risk. For this task, I've chosen a recurrent neural network model.

Recurrent Neural Network Portfolio Construction

A common difficulty with neural networks is obtaining a suitable amount of data: they are data-hungry and prone to overfitting. Because of this, I had to take a step back from the problem at hand. I don't have enough data to train a network that outputs optimal next-period portfolio weights for these sector ETFs alone. Instead, I attempt to build a network that forecasts the next-period covariance matrix for any portfolio of n assets. By abstracting away the specifics of the data, I might be able to create a model that handles the task without overfitting the sector data. I approach this by obtaining price data for a large number of stocks, sampling n of them at a time to create a large number of portfolios, and then generating a large number of time series of \bold{\Sigma}'s.
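A sketch of that data-generation step, assuming big_returns is a large matrix of monthly returns with one column per stock:

#Generate one training sequence of rolling covariance matrices
make_cov_sequence <- function(big_returns, n_assets = 9, window = 60) {
  sub <- big_returns[, sample(ncol(big_returns), n_assets)]
  lapply(seq_len(nrow(sub) - window + 1),
         function(t) cov(sub[t:(t + window - 1), ]))
}
#Many sampled portfolios, each yielding a sequence of Sigma's
training_seqs <- replicate(1000, make_cov_sequence(big_returns),
                           simplify = FALSE)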

With a time series of \bold{\Sigma}'s as input, I attempt to predict the next \bold{\Sigma} and generate optimal weights from it. As I've found out, this is no easy task. More than anything, the network learns to replicate the last value of the covariance matrix. As a result, a network that tries to forecast the next value in a sequence of covariance matrices at its core performs no better than a naive portfolio, with a huge amount of added complexity.

Since I am already attempting to internalize a GARCH model in the neural network, I also consider internalizing the weight-derivation step: instead of predicting the next \bold{\Sigma}, I attempt to forecast the next optimal set of GMV weights directly. At the moment, this forecast is no better than an equal-weight portfolio, which tells me this network isn't learning either. I believe I need to add more information to the training data. Perhaps adding more inputs (SPY returns, past sector performance, rankings of stock performance within each portfolio, rankings of stock correlations, anything that would help the model understand what relative movements are happening and why) could help these predictions, but that's an area for further research. I'll continue to update this section if any developments are made.
