Performance Attribution Time Series Visualization
Part V: Visualization
The previous part finalized the database function that produces cumulative, Cariño-scaled Brinson-Fachler performance attribution time series. It is natural to visualize these results using charts, and that is what I am doing in this part. For wider access and easier interpretation, I am using MS Excel to fetch the data and draw the charts. As this blog is really about SQL, I am not going to go into detail on how to build such an Excel workbook - it is intended more as a tool for actually looking at the results the database provides. A more modern choice for visualization would perhaps be MS Power BI (or some other BI tool).
The Excel file can be found in my GitHub repository, alongside the SQL scripts used in the preceding parts. Obviously, you need a relatively recent version of MS Excel to use it.
Making the Connection
The Excel workbook already contains the required data queries, but it needs to be pointed at the correct database. To modify the connection, go to one of the data sheets, e.g. the PerfTest Attribution sheet, and select one of the data cells. From the Query ribbon, select Edit; this should open the Power Query Editor. There, from the Home ribbon, select Data source settings. There should be only one data source (besides "Current Workbook") - select it and click Change Source. You should get a window where you can enter the Server and Database for the connection. These should match what you use in SQL Server Management Studio when running the queries from parts I-IV; you can check them by running select @@servername, db_name();. Click OK, close all the Power Query Editor windows, and you should be all set. Test by selecting Refresh All from the Data ribbon.
Serious Series
Now, there are some settings behind this chart. The most important are in the upper left corner of the sheet:
The Portfolio, Benchmark, and Structure must be selected from drop-down menus, while the dates can be entered freely. Of course, as long as the data cover only Sept 29 to Oct 31, 2023, there is no point trying to go outside that range.
The second chart shows the selected element of the results in more detail. Since selection seems to be the problem above, let's look at that more closely:
As we are using the Asset Class structure, the first level of the structure consists of the asset classes Bond, Cash, and Equity. It's hardly surprising that the selection effect comes almost entirely from Equity, as it is the biggest asset class and also the most volatile. If you wish to check the weights, select Portfolio weight or Benchmark weight in the drop-down menu in cell P1 to change the chart to show those. Other items that can be examined are the returns, return contributions, and attribution effects.
Going Deeper
So - it's equities where something went wrong. In fact, our return in Equity was -2.48 %, while in the benchmark it was -0.66 %, an underperformance of -1.82 %. But what caused it? Selecting 1 - Equity in the cell B23 drop-down menu changes the charts. Looking at the level 1 - Equity attribution, we can see it was both the allocation effect, cumulatively -0.77 %, and selection, -1.36 % (interaction was +0.31 %):
In the second chart, we can have a closer look at both of these effects. In allocation, it looks like everything went wrong. If you have a look at the benchmark returns and weight differences (using the cell P1 drop-down menu, not shown here), you can see that US equities were the best performing equity region while being massively underweighted (~44 % in portfolio, ~72 % in benchmark), and all the other regions were correspondingly overweighted.

What about selection within equities? It seems we were very close to the benchmark in all regions except the US. In fact, US underperformance explains -1.23 % of the -1.36 % selection effect within equities.
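As a reminder of what these effects measure, the single-period Brinson-Fachler decomposition for one segment can be sketched in Python. The function follows the standard formulas; the example weights and returns loosely echo the US equity figures above, but the total benchmark return R_b is a made-up placeholder, and the charts show cumulative, Cariño-scaled figures, so the numbers will not match exactly:

```python
def brinson_fachler(w_p, w_b, r_p, r_b, R_b):
    """Single-period Brinson-Fachler effects for one segment.

    w_p, w_b: portfolio / benchmark weight of the segment
    r_p, r_b: portfolio / benchmark return of the segment
    R_b:      total benchmark return
    """
    allocation = (w_p - w_b) * (r_b - R_b)   # effect of over/underweighting the segment
    selection = w_b * (r_p - r_b)            # effect of picks within the segment
    interaction = (w_p - w_b) * (r_p - r_b)  # combined weight-and-pick effect
    return allocation, selection, interaction

# Hypothetical single-period inputs (R_b is a placeholder value):
a, s, i = brinson_fachler(w_p=0.44, w_b=0.72, r_p=-0.0151, r_b=0.0022, R_b=-0.005)

# Per segment, the three effects sum to w_p*(r_p - R_b) - w_b*(r_b - R_b).
assert abs((a + s + i) - (0.44 * (-0.0151 + 0.005) - 0.72 * (0.0022 + 0.005))) < 1e-12
```

Summed over all segments (with weights summing to one on both sides), the effects add up to the total active return, the portfolio return minus the benchmark return.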
Once more, let's dig deeper. Selecting 2 - Equity - US in the cell B23 drop-down menu shows the Sectors within US equities - only two in the test data, Consumer and Services. This time, allocation between these two sectors was positive at +0.70 %, but selection was quite negative at -3.61 %, and interaction +1.18 %, summing to -1.73 %, the difference between the portfolio return (-1.51 %) and the benchmark return (+0.22 %).
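These figures can be sanity-checked with plain arithmetic: the three effects should sum to the active return, i.e. the portfolio return minus the benchmark return:

```python
# Sector-level figures for US equities, in percent (from the text above).
allocation, selection, interaction = 0.70, -3.61, 1.18
portfolio_return, benchmark_return = -1.51, 0.22

# 0.70 - 3.61 + 1.18 = -1.73, matching -1.51 - 0.22.
active_return = portfolio_return - benchmark_return
assert abs((allocation + selection + interaction) - active_return) < 1e-9
```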
So, allocation was successful: overweighting Services, which had a better benchmark return. Selection was negative in both sectors, -2.51 % in Consumer and -1.10 % in Services (not shown in the charts). Again, it is possible to delve deeper and look into e.g. the Consumer sector. Now we are at the Instrument level, so there is really no selection effect, as there is nothing to select from within a single instrument. Interaction is thus also zero. (A small selection effect may arise from trading, and a very, very small one from rounding the market values.) The differences in the weights of individual stocks between the portfolio and the benchmark explain (almost) all of the return difference, and are attributed to the allocation effect at this level.
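The instrument-level point can be read straight off the formulas: with a single instrument in a segment, its portfolio and benchmark returns coincide, so selection and interaction vanish and only allocation remains. A minimal sketch with made-up weights and returns:

```python
# Hypothetical instrument-level inputs: one instrument per segment,
# so the instrument's return is the same in portfolio and benchmark.
w_p, w_b = 0.10, 0.05   # made-up portfolio / benchmark weights
r = -0.20               # made-up instrument return (identical on both sides)
R_b = 0.01              # made-up total benchmark return

selection = w_b * (r - r)             # 0.0 - nothing to select from
interaction = (w_p - w_b) * (r - r)   # 0.0 for the same reason
allocation = (w_p - w_b) * (r - R_b)  # carries the whole effect

assert selection == 0.0 and interaction == 0.0
assert allocation < 0  # overweighting an underperformer hurts
```

This is the same mechanism as in the Visla case below: a big overweight in a badly performing instrument shows up entirely as a negative allocation effect.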
Our rather massive underperformance of -7.92 % in Consumer is explained by the fact that we invested in only two of the four instruments in the benchmark: Untealt and Visla. While Untealt was the best performing of the four, and our overweight gave a positive allocation effect of +1.87 %, Visla had a terrible month with a -20 % return, and our big overweight resulted in a negative allocation effect of -6.39 %.
The other two, Billy and Imazing, also had positive returns in the benchmark (better than the total sector benchmark), so holding zero positions in them resulted in negative allocation effects for them, too.
Final Thoughts
This concludes the series, which started by setting up a test database in MS SQL Server for portfolio and benchmark instruments, transactions, and prices, then built the daily positions, weights, and returns from that using database views and user-defined functions. Performance attribution calculations were added using another set of functions. Finally, this part served as a simple visualization of the results.