Turning the knobs – Parameter Optimization

    #35936

    Alexander Horn
    Keymaster

    Turning the knobs – Parameter Optimization

    #36357

    reuptake
    Participant

    What exactly is “Range Sharpe” and how does it compare to “Annual Sharpe” (given the same backtest period)? From what I can see, it is not simply the Annual Sharpe multiplied by the number of years, but something different?

    #36373

    Frank Grossmann
    Participant

    These are values found in the Optimizer window. Before running the optimizer you have to choose a chart range for the optimization in the main trader window. If you set, for example, the “History range” of the graph to 2 years, the optimizer runs quite fast, because it only optimizes the parameters for the last 2 years. If you hover over the chart, the optimizer shows you the results for a certain combination of lookback period (6-200 days) and volatility attenuator (0-10).
    The “Range return” shows the portfolio return over the selected 2 years, and the same applies to the Range Sharpe. Much more important are the Annual Return (CAGR, Compound Annual Growth Rate) and the Annual Sharpe: only with these annualized values is it possible to compare results.
    By the way, it can make sense to sometimes choose a chart range for the optimization which is smaller than the total available range of the ETFs. This way you can find strategy parameters which work better for a particular market situation, such as the up-and-down sideways market we have had for the last 2 years.
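
    To make the distinction concrete, here is a minimal sketch (not QuantTrader's internal code) of how a range return/Sharpe and their annualized counterparts could be computed from daily returns. The 252-trading-day convention, the zero risk-free default and the exact Range Sharpe scaling are assumptions for illustration.

```python
import numpy as np

TRADING_DAYS = 252  # assumed annualization convention

def range_and_annual_stats(daily_returns, risk_free=0.0):
    """Whole-range vs. annualized return and Sharpe from a series of daily returns."""
    r = np.asarray(daily_returns, dtype=float)
    n = len(r)
    growth = np.prod(1.0 + r)                  # total growth factor over the chart range
    range_return = growth - 1.0                # "Range return": not annualized
    cagr = growth ** (TRADING_DAYS / n) - 1.0  # "Annual Return" (CAGR)
    excess = r - risk_free / TRADING_DAYS
    daily_sharpe = excess.mean() / excess.std(ddof=1)
    range_sharpe = daily_sharpe * np.sqrt(n)              # one plausible reading: Sharpe over the whole range
    annual_sharpe = daily_sharpe * np.sqrt(TRADING_DAYS)  # comparable across backtest lengths
    return range_return, cagr, range_sharpe, annual_sharpe
```

    Under this reading the Range Sharpe grows roughly with the square root of the number of years, which would explain why it is not simply the Annual Sharpe times the number of years.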

    #36376

    reuptake
    Participant

    Thanks for the answer. One important question: which value is the color (or shade) correlated with, the Annual Sharpe or the Range Sharpe? It seems to be the Range Sharpe, which I find a bit misleading, since – as you wrote – the Annual Sharpe is more significant.

    #36457

    Alexander Horn
    Keymaster

    The shade reflects the Range Sharpe, i.e. the Sharpe of the range being optimized. Normally, as Frank said, you would first optimize for a shorter range and then extend the backtest once you find a stable range. This also helps to confirm the stability of the parameters.

    #36547

    mitch
    Participant

    Can you explain how the “volatility limit” in the parameter settings relates to the “volatility calculation” displayed in the summary window?

    As an example, I have a 60 day lookback period with a volatility limit of 8.

    When I set the history range to 2 months (60 days) and review the results in the summary window, the volatility is 9.475.

    I must be wrong, but I thought the program was calculating volatility as the standard deviation of returns over the designated lookback period.

    Wouldn’t this mean that both numbers should be the same?

    Thanks for your help!

    #36599

    Alexander Horn
    Keymaster

    Hello Mitch,

    Thanks for your interest and the question. The volatility limit is based on the historical volatility during the lookback period, as you describe (annualized standard deviation of returns). Whenever the historical volatility in the lookback period exceeds the limit, the allocations for the next period are scaled down accordingly. This rests on the big assumption that past historical volatility is a prediction of future volatility – which obviously is not the case all the time.

    The prediction error is what you see: instead of exactly matching the historical 8%, the realized result is slightly above it at 9.5%. Keep in mind that the algorithm only scales down allocations; it does not use any stop mechanism (which would be harmful in our experience).
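
    As a rough sketch of the mechanism described above (not QuantTrader's actual code), the scaling could look like this; the 252-day annualization and the proportional scaling rule are assumptions:

```python
import numpy as np

TRADING_DAYS = 252

def volatility_scaled_weight(lookback_returns, vol_limit=0.08, base_weight=1.0):
    """Scale an allocation down when realized lookback volatility exceeds the limit.

    The realized annualized volatility over the lookback window is only a forecast,
    so the next period's volatility can still overshoot the limit (8% vs. 9.5% above).
    """
    realized_vol = np.std(lookback_returns, ddof=1) * np.sqrt(TRADING_DAYS)
    if realized_vol <= vol_limit:
        return base_weight                          # under the limit: keep the full allocation
    return base_weight * vol_limit / realized_vol   # over the limit: scale down proportionally
```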

    #39547

    Tom Gnade
    Participant

    As I’ve been playing with QT, I’ve noticed that the optimizer sometimes lands on a highly optimal set of values that is surrounded by suboptimal solutions. If the heat map is “all white”, it indicates that the strategy has very little sensitivity to the lookback period or volatility limit. If the heat map is checkered randomly, it is very sensitive. I’ve seen mention of the optimized results being “curve fit” in such a case, when the optimal solution lands on a single “bright white” square in the middle of a checkered field of landmines, and of the solution being somewhat resilient when it is surrounded by similar shades of white and gray.

    It might be interesting to program an optimizer strategy that looks not for a single optimal value but rather for the largest cluster of stable values, because that represents a range of lookback periods and volatility limits that is resilient, meaning I would obtain similar results whether I chose 90, 95 or 100 days. That way, the recommendation is the center point of a cluster of stable values.
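
    A minimal sketch of that cluster idea (this is not an existing QuantTrader feature, and the 3x3 neighbourhood size is an arbitrary assumption): smooth the heat map so that an isolated bright square loses out to a broad plateau of similar values, then pick the best cell of the smoothed map.

```python
import numpy as np

def stable_optimum(heatmap, window=3):
    """Return the (row, col) of the center of the most stable high-performing region.

    Each cell is replaced by the mean of its window x window neighbourhood, so a
    single outstanding value surrounded by poor results no longer wins.
    """
    h, w = heatmap.shape
    pad = window // 2
    padded = np.pad(heatmap, pad, mode="edge")
    smoothed = np.empty_like(heatmap, dtype=float)
    for i in range(h):
        for j in range(w):
            smoothed[i, j] = padded[i:i + window, j:j + window].mean()
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)
```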

    #39549

    Alexander Horn
    Keymaster

    That’s an interesting approach. I guess such cluster algorithms have been researched already; let’s see if we find an application. For me it’s always interesting to see how close these optimization algorithms are to image processing.

    My personal approach to the optimization heatmap is less scientific, though. My brain works very visually, so I need to see and “feel” how sensitive the performance is to the parameter ranges. Very often I’m then either confident in the results, take them with a “grain of salt” discount of a few percent, or even see other “white spots” which look more interesting than what QT has found.

    I’m interested to hear more opinions on how we can further enhance this functionality.

    #39550

    reuptake
    Participant

    I’ve read several times that you can get a false impression of stable results when comparing lookback periods. The thing is, the difference between a 2-day and a 3-day lookback is 50%, while between 100 and 101 days it is only 1%, so the backtest results will look more stable around large lookback values. That is why it is said that one should rather use multiplicative (geometric) steps for the lookback instead of linear ones.
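
    A small sketch of that suggestion, assuming the 6-200 day bounds mentioned earlier for the optimizer and an arbitrary number of steps: generate lookback candidates that are evenly spaced in percentage terms rather than in days.

```python
import numpy as np

def lookback_grid(min_days=6, max_days=200, steps=12):
    """Geometrically spaced lookback candidates: each step changes the lookback
    by the same percentage instead of the same number of days."""
    grid = np.geomspace(min_days, max_days, steps)
    return sorted(set(int(round(x)) for x in grid))

# lookback_grid() yields ~12 candidates between 6 and 200 days, denser at the short end.
```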

    #39685

    Tom Gnade
    Participant

    I have 3 suggestions for optimization:

    1) Prior to optimizing a strategy, offer an “optimization goal” that can be any of
    a) maximize overall return
    b) minimize volatility
    c) etc…
    with target thresholds, like 20%, 15%, 10%, 5%. It may be that the strategies and other settings already accomplish this though, so it might be redundant.

    2) Offer a checkbox that would search for optima with stable nearby values (a large hot spot on the heat map)

    3) I suggest allowing the option to “continuously reoptimize” with a selected optimization period, rather than forcing optimization over the currently selected historical display range. Those two things should be separate, with the historical display used for presentation only.

    In my experiments with the software, I noticed that it is possible to optimize over a shorter time period and achieve better results than with the optima from a very long time period. I’d like to see a simulation over a long period of time that uses a continuous optimization approach, so the lookback period slowly changes as the underlying instruments go through different high/low volatility and return regimes.
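
    A walk-forward sketch of that continuous-reoptimization idea (not a QuantTrader feature; `backtest_sharpe`, the two-year in-sample window and the quarterly step are hypothetical placeholders):

```python
import numpy as np

def walk_forward(prices, candidate_lookbacks, backtest_sharpe,
                 insample_days=504, step_days=63):
    """Reoptimize the lookback on a rolling in-sample window and record the choice
    made at each reoptimization date; the chosen value would then be applied
    out-of-sample until the next step."""
    choices = []
    for end in range(insample_days, len(prices), step_days):
        window = prices[end - insample_days:end]  # in-sample data only, no look-ahead
        scores = [backtest_sharpe(window, lb) for lb in candidate_lookbacks]
        choices.append((end, candidate_lookbacks[int(np.argmax(scores))]))
    return choices
```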

    #39726

    Frank Grossmann
    Participant

    This is true. However, long lookback periods can also be dangerous, because the strategy performance may result from only a few ETF changes; a result which is based on many ETF changes is much less prone to such hazards. You can also see whether a system is stable in the graphical optimizer window: if there are big performance jumps with only slightly different lookback periods, then the system is not really stable.

    #42592

    Tom Gnade
    Participant

    This may not be the correct forum to propose this, but do you think it would be worthwhile exploring a different kind of optimization? I think it would be very interesting to minimize the “time to recovery” as an alternative optimization strategy. In other words, any time the strategy achieves a “high water mark” in value, we measure the amount of time it requires for the strategy to exceed that level, regardless of volatility. Rather than minimizing volatility per se, we are minimizing the time to recovery for the strategy or portfolio. As soon as the previous high water mark is surpassed, a new one is set. The optimization would search for the lowest average TTR value in days.

    I have no idea if it would produce good results, but it would most definitely be an interesting test. Thoughts?
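
    A sketch of how that objective could be measured (this is not an existing QuantTrader metric; skipping back-to-back new highs and truncating an unfinished drawdown at the last bar are interpretation choices):

```python
import numpy as np

def average_time_to_recovery(equity_curve):
    """Average number of bars between setting a high-water mark and exceeding it."""
    eq = np.asarray(equity_curve, dtype=float)
    high, start = eq[0], 0        # current high-water mark and the bar where it was set
    recoveries = []
    for i, v in enumerate(eq):
        if v > high:
            if i - start > 1:     # skip back-to-back new highs (no drawdown in between)
                recoveries.append(i - start)
            high, start = v, i
    if start < len(eq) - 1:       # still under water at the end of the series
        recoveries.append(len(eq) - 1 - start)
    return float(np.mean(recoveries)) if recoveries else 0.0
```

    The optimizer would then search for the parameter combination with the lowest average value in days.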

    #42655

    reuptake
    Participant

    (Quoting Tom Gnade, #42592, on minimizing “time to recovery” as an alternative optimization target.)

    This is actually a great idea. I’d love to see the results for the Ulcer Performance Ratio, which measures both the depth and the length of drawdowns.
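
    For reference, a sketch of the Ulcer Performance Ratio using its standard definition, annualized excess return divided by the Ulcer Index (not QuantTrader's implementation; the 252-day annualization and percent units are assumptions):

```python
import numpy as np

TRADING_DAYS = 252

def ulcer_performance_ratio(equity_curve, risk_free_pct=0.0):
    """Annualized excess return (in %) divided by the Ulcer Index.

    The Ulcer Index is the root-mean-square of percentage drawdowns from the
    running high, so it penalizes deep as well as long drawdowns."""
    eq = np.asarray(equity_curve, dtype=float)
    running_high = np.maximum.accumulate(eq)
    drawdown_pct = 100.0 * (eq - running_high) / running_high   # <= 0 by construction
    ulcer_index = np.sqrt(np.mean(drawdown_pct ** 2))
    cagr_pct = 100.0 * ((eq[-1] / eq[0]) ** (TRADING_DAYS / len(eq)) - 1.0)
    return (cagr_pct - risk_free_pct) / ulcer_index if ulcer_index > 0 else np.inf
```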

