Turning the knobs – Parameter Optimization

What exactly is the “Range Sharpe”, and how does it compare to the “Annual Sharpe” (given the same backtest period)? From what I see it is not simply Annual Sharpe multiplied by the number of years, but something different?

These are values found in the Optimizer window. Before running the optimizer you have to choose a chart range for the optimization in the main trading window. You could, for example, set the “History range” of the chart to 2 years. This way the optimizer runs quite fast, because it only optimizes the parameters over the last 2 years. If you hover over the chart, the optimizer shows you the result for a particular combination of lookback period (6–200 days) and volatility attenuator (0–10).

The “Range return” shows the portfolio return over the selected 2 years; the same applies to the Range Sharpe. Much more important are the Annual Return (CAGR, Compounded Annual Growth Rate) and the Annual Sharpe. Only with these annualized values is it possible to compare results across different ranges.
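
To make the distinction concrete, here is a minimal Python sketch of how a range return annualizes into a CAGR, and how a series of daily returns annualizes into a Sharpe ratio. This is the textbook calculation with a zero risk-free rate, not necessarily QuantTrader’s exact internal one:

```python
import statistics

def cagr(range_return, years):
    """Compounded annual growth rate from a cumulative range return,
    e.g. a 44% return over 2 years annualizes to roughly 20%."""
    return (1.0 + range_return) ** (1.0 / years) - 1.0

def annual_sharpe(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio (risk-free rate assumed 0 for simplicity):
    mean daily return over its standard deviation, scaled by sqrt(252)."""
    mean = statistics.mean(daily_returns)
    sd = statistics.stdev(daily_returns)
    return (mean / sd) * periods_per_year ** 0.5
```

With these, a 2-year “Range return” of 0.44 and a 5-year one of 1.49 both map to comparable annual figures, which is why only the annualized values allow a fair comparison.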

By the way, it can make sense to choose a chart range for the optimization which is smaller than the total available history of the ETFs. This way you can find strategy parameters which work better for a particular market situation, like the choppy sideways market we have had for the last 2 years.

Thanks for the answer. One important question: which value is the color (or shade) correlated with, the Annual Sharpe or the Range Sharpe? It seems to be the Range Sharpe, which I find a bit misleading, since – as you wrote – the Annual Sharpe is more significant.

The shade reflects the Range Sharpe, i.e. the range being optimized. Normally, as Frank said, you would first optimize over a shorter range and then extend the backtest once you find a stable region. This also helps to confirm the stability of the parameters.

Can you explain how the “volatility limit” in the parameter settings relates to the “volatility calculation” displayed in the summary window?

As an example, I have a 60 day lookback period with a volatility limit of 8.

When I set the history range to 2 months (60 days) and review the results in the summary window, the volatility is 9.475.

I must be wrong, but I thought the program was calculating volatility as the standard deviation of returns over the designated lookback period.

Wouldn’t this mean that both numbers should be the same?

Thanks for your help!

Hello Mitch,

Thanks for your interest and the question. The volatility limit is based on the historical volatility during the lookback period, as you describe (annualized standard deviation). So whenever the historical volatility in the lookback period exceeds the limit, the allocations for the next period are scaled down accordingly. This involves a big assumption, namely that past historical volatility is a prediction of future volatility – which obviously is not the case all the time.

The prediction error is what you see: instead of exactly matching the historical 8%, the realized result is slightly above at 9.5%. Keep in mind that the algorithm only scales down allocations; it does not use any stop mechanism (which would be harmful in our experience).
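
The scaling described here can be sketched roughly as follows. This is a hypothetical illustration of the idea only; QuantTrader’s exact scaling formula is not spelled out in this thread:

```python
import math

def allocation_scale(daily_returns, vol_limit=0.08, periods_per_year=252):
    """Scale factor for next period's allocation: 1.0 while historical
    annualized volatility stays at or below the limit, otherwise
    limit/volatility, so the scaled position targets the limit.
    A sketch of the concept, not QuantTrader's documented formula."""
    mean = sum(daily_returns) / len(daily_returns)
    var = sum((r - mean) ** 2 for r in daily_returns) / (len(daily_returns) - 1)
    hist_vol = math.sqrt(var) * math.sqrt(periods_per_year)  # annualized
    return min(1.0, vol_limit / hist_vol)
```

Because the scaling is based on the past lookback window, realized volatility can still overshoot the limit when volatility rises faster than the window catches it, which matches the 8% target versus 9.5% realized figure above.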

As I’ve been playing with QT, I’ve noticed that the optimizer will sometimes land on a highly optimal set of values that is surrounded by suboptimal solutions. If the heat map were “all white”, it would indicate that the strategy has very little sensitivity to the lookback period or volatility limit. If the heat map is checkered randomly, it is very sensitive. I’ve seen the optimized results described as “curve fit” when the optimal solution lands on a single “bright white” square in the middle of a checkered field of landmines, and as somewhat resilient when it is surrounded by similar shades of white and gray.

It might be interesting to program an optimizer strategy that looks not for a single optimal value, but rather for the largest cluster of stable values, because that represents a range of lookback periods and volatility limits that is resilient, meaning I would obtain similar results whether I chose 90, 95 or 100 days. That way, the recommendation would be the center point of a cluster of stable values.
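
The proposed cluster search could be prototyped with a simple flood fill over the heatmap. The function below is a hypothetical sketch of the idea, not an existing QuantTrader feature:

```python
from collections import deque

def stable_cluster_center(heatmap, threshold):
    """Return the center (row, col) of the largest 4-connected cluster of
    cells whose value is at least `threshold`, or None if no cell qualifies.
    heatmap: 2-D list of e.g. Sharpe values indexed [lookback][vol_limit]."""
    rows, cols = len(heatmap), len(heatmap[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or heatmap[r][c] < threshold:
                continue
            cluster, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:  # breadth-first flood fill of one cluster
                y, x = queue.popleft()
                cluster.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and heatmap[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(cluster) > len(best):
                best = cluster
    if not best:
        return None
    cy = round(sum(p[0] for p in best) / len(best))
    cx = round(sum(p[1] for p in best) / len(best))
    return (cy, cx)
```

An isolated “bright white” square then loses to a broad plateau of good values, which is exactly the resilience argument made above. This connects nicely to the image-processing remark below: flood fill and connected-component labeling come straight from that field.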


That’s an interesting approach; I guess such cluster algorithms have been researched already, let’s see if we find an application. For me it’s always interesting to see how close these optimization algorithms are to image processing.

My personal approach to the optimization heatmap is less scientific, though. My brain works very visually, so I need to see and “feel” how sensitive the performance is to the parameter ranges. Very often I’m then either confident in the results, take them with a percentage “grain of salt” discount, or even see other “white spots” which look more interesting than what QT has found.

Interested to hear more opinions on how we can further enhance this functionality.

I have 3 suggestions for optimization:

1) Prior to optimizing a strategy, offer an “optimization goal” that can be any of

a) maximize overall return

b) minimize volatility

c) etc…

with target thresholds, like 20%, 15%, 10%, 5%. It may be that the strategies and other settings already accomplish this, though, so it might be redundant.

2) Offer a checkbox that would search for optima with stable nearby values (a large hot spot on the heat map)

3) I suggest allowing the option to “continuously reoptimize” over a selected optimization period, rather than forcing optimization over the currently selected historical display range. Those two things should be separate, with the historical display for presentation only.

In my experiments with the software, I noticed that it is possible to optimize over a shorter time period and achieve better results than using the optima from a very long time period. I’d like to see a simulation over a long period of time that uses a continuous optimization approach, so the lookback period slowly changes as the underlying instruments move through different high/low volatility and return regimes.

I’ve read several times that you can get a false impression of stable results when using lookback periods. The thing is, the difference between a 2-day and a 3-day lookback is 50%, while between 100 and 101 days it is only 1%. So backtest results will look more stable around large lookback values. That is why it is said one should rather step the lookback multiplicatively.

This is true; however, long lookback periods can also be dangerous, because the strategy performance may result from only a few ETF changes. A result which is based on many ETF changes is much less prone to such hazards. You can also see whether a system is stable in the graphical optimizer window: if there are big performance jumps with only slightly different lookback periods, then the system is not really stable.
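
The multiplicative stepping suggested above can be generated like this (an illustrative sketch; the actual optimizer grid in QuantTrader may differ):

```python
def geometric_lookbacks(start=6, stop=200, step=1.25):
    """Lookback candidates spaced by a constant multiplicative factor,
    so each step changes the lookback by the same relative amount
    (here 25%) instead of a fixed number of days. This gives equal
    weight to the 2-vs-3-day and 100-vs-125-day distinctions."""
    out, lb = [], float(start)
    while lb <= stop:
        day = round(lb)
        if not out or day != out[-1]:  # skip duplicates after rounding
            out.append(day)
        lb *= step
    return out
```

Scanning such a geometric grid avoids the illusion of stability at large lookbacks, because neighboring candidates always differ by the same relative amount.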

This may not be the correct forum to propose this, but do you think it would be worthwhile exploring a different kind of optimization? I think it would be very interesting to minimize the “time to recovery” as an alternative optimization strategy. In other words, any time the strategy achieves a “high water mark” in value, we measure the amount of time it requires for the strategy to exceed that level, regardless of volatility. Rather than minimizing volatility per se, we are minimizing the time to recovery for the strategy or portfolio. As soon as the previous high water mark is surpassed, a new one is set. The optimization would search for the lowest average TTR value in days.

I have no idea if it would produce good results, but it would most definitely be an interesting test. Thoughts?
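
The proposed time-to-recovery measure could be computed along these lines. This is a sketch of the idea exactly as described above; unrecovered drawdowns at the end of the series are simply ignored here:

```python
def avg_time_to_recovery(equity):
    """Average number of bars between falling below a high-water mark
    and exceeding it again. Each time the previous high is surpassed,
    a new high-water mark is set. Open (unrecovered) drawdowns at the
    end of the series are ignored in this sketch."""
    high, start, spells = equity[0], None, []
    for i, v in enumerate(equity):
        if v > high:
            if start is not None:
                spells.append(i - start)  # recovery completed
                start = None
            high = v
        elif start is None and v < high:
            start = i  # drawdown begins
    return sum(spells) / len(spells) if spells else 0.0
```

An optimizer could then minimize this average (in days) instead of volatility, which is the alternative objective being proposed.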


This is actually a great idea. I’d love to see the results for the Ulcer Performance Index, which measures both the depth and the length of drawdowns.

I’ll just drop this here: https://www.keyquant.com/Download/GetFile?Filename=%5CPublications%5CKeyQuant_WhitePaper_APT_Part1.pdf

Super interesting, in my opinion.


RichardT – 06/21/2017 at 5:37 pm
I agree – very interesting article by Key Quant.

For me, Max Drawdown is a far more important measure of risk than volatility, because it directly relates to the pain I would feel and to whether I would stick with the strategy rather than bailing out at the wrong time. A 10% drawdown on a 500K portfolio equals a drawdown of 50K, which I can tolerate, but a 20% drawdown, equivalent to 100K, I could not.

In the Portfolio Builder I always ran it to maximize CAGR and minimize MaxDD. However, that is not easy to do in QuantTrader: you have to select a setting in the heatmap, click Apply, and then view the MaxDD in the Strategy Summary panel. This is extremely laborious, as lower volatility does not always guarantee a lower MaxDD.

Could you introduce a parameter to the optimizer so that it only displays results with a drawdown of less than x%, or at least display the Draw/Range in the heatmap summary panel?


I have to disagree: max drawdown is a very bad measure of risk, since it is a one-time event. You can’t measure risk, which is multidimensional, by looking at just one single point in time. The Ulcer Index is much better, since it takes into account not only the “depth” of drawdowns but also their “duration”, which is painful as well. You may not even notice that your strategy had a large drawdown if it was just a brief flash crash. But imagine it being 10% for many months…
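
For reference, the Ulcer Index is the root-mean-square of percentage drawdowns from the running high-water mark, so both depth and duration contribute to it. A minimal sketch, with the UPI in the common Martin-ratio form (the annual return and risk-free parameters are illustrative):

```python
import math

def ulcer_index(equity):
    """Ulcer Index: root-mean-square of percentage drawdowns from the
    running high-water mark. A long shallow drawdown can score as badly
    as a short deep one, penalizing both depth and duration."""
    high, sq_dd = equity[0], []
    for v in equity:
        high = max(high, v)
        sq_dd.append(((v - high) / high * 100.0) ** 2)
    return math.sqrt(sum(sq_dd) / len(sq_dd))

def ulcer_performance_index(equity, annual_return, risk_free=0.0):
    """UPI (Martin ratio): excess annual return per unit of Ulcer Index."""
    ui = ulcer_index(equity)
    return (annual_return - risk_free) / ui if ui else float("inf")
```

Unlike max drawdown, every under-water bar contributes to the index, so a 10% drawdown that lasts many months scores far worse than a 10% flash crash that recovers in days.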

I agree with both.

For me, drawdown (and time to recovery, or duration) is also a very important soft KPI when looking at an equity line and trying to figure out whether my stomach can handle it. Finally, for most investors, drawdown (not CAGR, volatility or Sharpe) is in the end the killing factor which makes them (..us) abandon a strategy. I think these are the points along the lines of Richard’s argument.

On the other hand, I also agree with reuptake. Drawdown, or Maximum Drawdown to be exact, only describes a historical one-time event within the timeframe of the equity curve. So statistically it is a weak measure: how frequently is the equity “under water” overall, and how long and sharp are the corrections? Was that Max Drawdown due to a specific reason with a high or low probability of repeating in the future? The Ulcer Index or MAR improves some of these deficiencies, but is also harder to interpret.

So in the end, drawdown for me stays an important SOFT measure when manually looking at equity curves, but one needs to be aware of its shortcomings when statistically evaluating or comparing strategies.

And just to throw a new one into the round: R², the coefficient of determination (https://en.wikipedia.org/wiki/Coefficient_of_determination or http://www.morningstar.com/invglossary/r_squared_definition_what_is.aspx), is a measure I’m looking at more and more. In simple terms it expresses, in percentage terms, how close an equity curve is to a straight line (on a logarithmic scale), which is ultimately what we’re looking for.

Tom added a new thread about this measure recently, and it may also serve as an answer to a question by Joachim some weeks ago on how to measure how well a strategy is performing after inception compared to the backtest: if the R² after inception is lower than the R² of the backtest, it is a sign of weakness, or even of curve-fitting.
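
The R² of an equity curve against constant compound growth can be computed by fitting a straight line to the log of the curve. The sketch below uses a plain least-squares fit and is illustrative only:

```python
import math

def log_r_squared(equity):
    """R^2 of a linear least-squares fit to the log of the equity curve:
    how close the curve is to constant compound growth
    (1.0 = perfectly straight on a log scale)."""
    y = [math.log(v) for v in equity]
    n = len(y)
    xm, ym = (n - 1) / 2.0, sum(y) / n
    sxy = sum((x - xm) * (v - ym) for x, v in enumerate(y))
    sxx = sum((x - xm) ** 2 for x in range(n))
    slope = sxy / sxx
    intercept = ym - slope * xm
    ss_res = sum((v - (intercept + slope * x)) ** 2 for x, v in enumerate(y))
    ss_tot = sum((v - ym) ** 2 for v in y)
    return 1.0 - ss_res / ss_tot
```

Comparing this value for the live period after inception against the backtest period is the weakness/curve-fitting check described above.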

Maybe this is a way to go?

.. may the discussion continue..

I put simple test data into Excel that compounds at a consistent rate. The scatter plot allows an exponential curve to be fit to the data, and it includes an R² calculation. I had thought it would only calculate R² for a straight line, but it looks like it can just as easily do it for an exponential curve or even a polynomial. So you don’t need to plot your data on a logarithmic scale; you can calculate R² directly from the “curved” data. Just thought it might be worth mentioning.

To be precise: I’m not against using drawdown, I’m just against using max drawdown. The Ulcer Performance Index (UPI) is all about drawdown. BTW: why do you think the Ulcer Index is hard to interpret (compared to the modified Sharpe)?

I have to dig deeper into R² (there are also other measures), but I’m not sure if “a straight line (on a logarithmic scale)” is “something we’re finally looking for”. Maybe a strategy will have a different characteristic (growing faster than that?).

But there is another, more fundamental question regarding Logical Invest strategies. My understanding is that we are trying to apply the momentum factor to characteristics of a strategy other than pure profit. Let me explain what I mean.

Momentum means that what was profitable in the past tends to be profitable in the future (statistically). This phenomenon is well described; there is a ton of research on it, including research that tries to determine whether momentum is real and what causes this inefficiency.

But we are pushing it a bit further here, if I understand correctly. We are adding risk to the mixture. When using the modified Sharpe or UPI we are not only saying that “what was profitable in the past tends to be profitable in the future” but that “what had a good profit/risk ratio in the past will have a good profit/risk ratio in the future”. This is an interesting hypothesis, but again: it is still a hypothesis.
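
The risk-adjusted ranking being described can be sketched as follows. This is a hypothetical illustration of a “return over volatility to the power f” ranking, with f playing the role of the volatility attenuator; it is not claimed to be QuantTrader’s exact internal formula, and the ticker stats are made up:

```python
def modified_sharpe(period_return, volatility, f=1.0):
    """Ranking score: lookback return divided by volatility raised to
    the attenuator f. f=0 ranks on pure momentum (return only), f=1 is
    the classic return/volatility ratio, f>1 punishes volatility harder."""
    return period_return / volatility ** f

def rank_assets(stats, f=1.0):
    """stats: dict of {symbol: (lookback_return, lookback_volatility)}.
    Returns symbols ordered best-first by modified Sharpe."""
    return sorted(stats, key=lambda s: modified_sharpe(*stats[s], f=f),
                  reverse=True)
```

Notice how the hypothesis changes with f: a high-return, high-volatility asset wins under pure momentum (f=0) but can lose to a calmer asset once risk enters the score (f=1), which is exactly the profit/risk-persistence bet discussed above.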

Another very interesting article, this time about risk-adjusted momentum. A lot of work went into this one: http://www.investresolve.com/blog/dynamic-asset-allocation-for-practitioners-part-3-risk-adjusted-momentum/

This series of articles is one of my favourites, great stuff! Also make sure to read their “Adaptive Asset Allocation – a primer” whitepaper; excellent, and I love these guys!!

We’ve replicated their strategies and get very close to the results they report; very trustworthy and solid stuff. And their “Select Top X ETF based on Momentum and Volatility” is actually very close to our methodology in QuantTrader.

… we leave it up to our valued followers to judge which is better 🙂 Public P-Contests are not our style, instead here just a Hip Hip Hurray to Adam Butler and gang!!

I have an optimization question about QuantTrader. It is not clear to me what parameters are being optimized when I click on “optimize”. Obviously the lookback period, but what else? It seems to magically improve the CAGR, but I am unable to replicate the changes manually, so I don’t have any feel for whether the optimizations are real or just a very accurate backtesting artifact.

I have an optimization question using QuantTrader. It is not clear to me what parameters are being optimized when I click on “optimize”.

Lookback period / volatility attenuator

Thanks. So what is the value of the volatility attenuator if the field is blank? I had assumed zero.

The default value for the volatility attenuator is 1, i.e. if it is left blank, the “modified Sharpe ratio” we are optimizing for becomes the standard Sharpe ratio.

Here much more about the nuts and bolts involved, see especially video 3.5: https://logical-invest.com/video-tutorial-quanttrader-a-complete-walk-through-for-new-users/
