Strategy Parameter: Advanced – Out of Sample Parameters


Viewing 6 posts - 1 through 6 (of 6 total)
  • Author
    Posts
  • #79833
    StefanM
    Participant

    Hi LI team,

    I am planning some Out-of-Sample (‘OOS’) testing in QT and noticed that under Advanced Strategy Parameters there appear to be some settings for running OOS testing.

    Is there a video or some training available so we can utilise this please? If not, would someone in the community be able to provide a quick overview of how to use it please?

    Thanks

    Stefan

    #79841
    Alexander Horn
    Keymaster

    Hi Stefan,

    the OOS testing is still work in progress, but we’re actively using it. See here: https://logical-invest.com/walk-forward-testing-avoid-curve-fitting-backtesting/

    #79847
    StefanM
    Participant

    Thanks very much Alex, the article was interesting reading. I will use the OOS QT methodology and see how I get on (it has certainly saved me from creating my own approach within QT!).

    I am sure you have come across the book ‘Systematic Trading’ by Robert Carver, a London-based manager of systematic hedge funds. One of his key points when optimising strategies is to avoid ‘cheating’ (to use his language): if you optimise over in-sample data, the resulting parameters could never actually have been used within that data set, so the performance a backtest shows over that period is not what you would really have achieved. My interpretation is that we would have been at risk of greater volatility and greater drawdown than the backtest suggested. In short, none of us has a time machine to go back and trade using the parameters just identified.

    In a way very similar to your approach, he then says we should craft a rolling series of sample vintages (i.e. rolling chunks of sample data) on which to generate optimisations; each set of parameters is then applied to, and backtested over, the immediately following out-of-sample period.
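To make the rolling-vintage idea concrete, here is a minimal sketch of how such walk-forward windows could be generated. The function name, window lengths and index convention are my own illustration, not how QT implements it internally:

```python
# Minimal sketch of rolling walk-forward windows: each in-sample (IS)
# "vintage" is immediately followed by an out-of-sample (OOS) period,
# and the whole pair rolls forward by one OOS chunk at a time.

def walk_forward_windows(n_periods, is_len, oos_len):
    """Yield (is_start, is_end, oos_start, oos_end) index tuples."""
    windows = []
    start = 0
    while start + is_len + oos_len <= n_periods:
        is_end = start + is_len
        windows.append((start, is_end, is_end, is_end + oos_len))
        start += oos_len  # roll forward by one OOS chunk
    return windows

# Example: 120 months of data, 60-month IS vintages, 12-month OOS
for w in walk_forward_windows(120, 60, 12):
    print(w)
```

Each tuple is one vintage: optimise on `[is_start, is_end)`, then backtest the resulting parameters on `[oos_start, oos_end)`.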

    The challenge, as you identified in the article, is the natural next step: generating consecutive sample vintages and their optimised parameters. Once you have, say, five or ten sample vintages, how do you take their results and convert them into a single set of parameters to use going forward? Manual observation, averaging, or perhaps letting QT do the heavy lifting?
    One goal would be for each sample vintage to cover a different subset of market environments: pre-crash, post-crash, market rise, market top, etc. That way we would hopefully have a suite of parameters for each environment. The challenge would then be how to weight each environment’s parameter set in a non-discretionary way…
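One simple, non-discretionary way to collapse several optimised vintages into a single forward-looking parameter set is a per-parameter median (more robust to one outlier vintage than the mean). The parameter names and vintage values below are invented purely for illustration:

```python
# Combine several optimised parameter vintages into one set by taking
# the median of each parameter across vintages. Values are made up.

from statistics import median

vintages = [
    {"lookback": 60, "vol_attenuation": 3},
    {"lookback": 80, "vol_attenuation": 5},
    {"lookback": 70, "vol_attenuation": 4},
    {"lookback": 90, "vol_attenuation": 4},
    {"lookback": 65, "vol_attenuation": 6},
]

combined = {k: median(v[k] for v in vintages) for k in vintages[0]}
print(combined)  # per-parameter median across the five vintages
```

A mean or a recency-weighted average would be equally mechanical alternatives; the median just ignores a single freak vintage.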

    #79855
    Alexander Horn
    Keymaster

    Hi Stefan,

    yes, indeed the question of how to glue the parameter results from the different data chunks together is interesting. Some use moving averages, others set limits for the max change of parameters between data chunks.
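The "limit the max change between data chunks" idea Alexander mentions can be sketched in a few lines: each newly optimised value is clamped so it can move at most a fixed step away from the previous chunk's value. The numbers are illustrative only:

```python
# Sketch of limiting the max parameter change between data chunks:
# each new optimised value may move at most `max_step` away from the
# previously used value, which damps chunk-to-chunk whipsaw.

def smooth_parameters(raw_values, max_step):
    smoothed = [raw_values[0]]
    for v in raw_values[1:]:
        prev = smoothed[-1]
        # clamp the jump to the band [prev - max_step, prev + max_step]
        smoothed.append(min(max(v, prev - max_step), prev + max_step))
    return smoothed

# A jumpy sequence of optimised lookbacks, limited to 10-unit steps
print(smooth_parameters([60, 95, 50, 70], max_step=10))
```

A moving average over the raw optimised values would be the other smoothing variant mentioned above.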

    We’ve also done walk-forward testing in other tools like QuantShare and Amibroker. The problem is often the increasing complexity: you get lost in academic exercises about how best to massage the data, rather than addressing the fundamental question of how to get to a robust and simple system.

    In my very personal experience, taking an educated guess or rough ballpark estimate of a stable parameter range from a heatmap of backtest outcomes very often gives the best results. Validating that with more sophisticated tests then adds confidence, even if this sounds like inverting the process.
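The "eyeball a stable region of the heatmap" heuristic can even be automated: instead of taking the single best cell of a parameter grid, take the cell whose 3x3 neighbourhood has the best average Sharpe, which favours plateaus over lucky spikes. The grid values below are invented for illustration:

```python
# Pick the interior cell of a parameter heatmap whose 3x3
# neighbourhood has the highest average Sharpe, i.e. prefer a
# stable plateau over an isolated spike. Grid values are made up.

def best_stable_cell(grid):
    rows, cols = len(grid), len(grid[0])
    best, best_avg = None, float("-inf")
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neigh = [grid[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            avg = sum(neigh) / 9
            if avg > best_avg:
                best, best_avg = (r, c), avg
    return best, best_avg

sharpe_grid = [
    [0.2, 0.3, 0.2, 0.1, 0.1],
    [0.3, 0.9, 0.8, 0.1, 0.1],
    [0.2, 0.8, 0.9, 0.1, 2.0],  # 2.0 is an isolated lucky spike
    [0.1, 0.2, 0.1, 0.1, 0.1],
]
print(best_stable_cell(sharpe_grid))
```

Here the global maximum (2.0) sits alone in a sea of poor results, while the neighbourhood average correctly picks the 0.8–0.9 plateau.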

    #79864
    bmessas
    Participant

    Hi Alex,

    I just wanted to clarify my understanding of the OOS algorithm. In your example at https://logical-invest.com/walk-forward-testing-avoid-curve-fitting-backtesting/, we look for the best Sharpe over the last 60 months by adjusting the lookback period and the volatility attenuator, and we repeat the same operation at each rebalancing date.
    If that is the case, choosing the best Sharpe could overfit as well, since it may be a single point that only optimises the result locally, versus the visual interpretation you would make in manual mode.
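The re-optimisation step described above can be sketched as follows: at each rebalance, every candidate parameter value is scored by the Sharpe ratio of its returns over the trailing window, and the best scorer is selected. The return series, candidates and the simple (non-annualised) Sharpe are all illustrative assumptions, not the article's exact computation:

```python
# At each rebalance: score each candidate parameter by the Sharpe
# ratio of its trailing-window returns, then pick the best scorer.
# Returns and candidate lookbacks are made up for illustration.

from statistics import mean, stdev

def sharpe(returns):
    """Simple per-period Sharpe (mean over sample std, no annualising)."""
    return mean(returns) / stdev(returns)

# trailing-window monthly returns for two candidate lookbacks
candidates = {
    3: [0.02, -0.01, 0.03, 0.01, -0.02, 0.04],
    6: [0.01, 0.01, 0.02, 0.01, 0.00, 0.02],
}

best = max(candidates, key=lambda k: sharpe(candidates[k]))
print("selected lookback:", best)
```

This also illustrates bmessas's concern: the winner is whichever single point scored best on the trailing window, with no regard for whether its neighbours scored similarly.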

    #79867
    Mark Vincent
    Participant

    Here is my experience over the last 20 years; it is just my opinion.

    I have followed this topic on many investment sites, and they have all failed to produce alpha out of sample (OOS). You can have the best process for making sure you don’t overfit, but if the market forces change you are finished. You need an edge and a hypothesis of what the market will do going forward, because the past may not hold true. I think the current stat is that 95% of active managers don’t beat the market over 10 years; that is how hard it is to get alpha. The only thing that matters is OOS performance. You can spend all your time making sure you don’t curve-fit, and it may not help. Here is what I do:

    1. Use uncorrelated assets (This is an Edge)
    2. Use risk parity
    3. Use LI to help you do both (The strategies of strategies model is great place to start)
    4. Come up with your own Hypothesis that will give you an Edge and allocate 10% to it. Mine is FAANGM.
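Point 2 can be sketched with the simplest form of risk parity, inverse-volatility weighting: each asset's weight is proportional to 1/volatility, so lower-volatility assets carry more capital. The asset mix and volatility figures are illustrative assumptions:

```python
# Naive risk parity: weight each asset by the inverse of its
# volatility, normalised to sum to 1. Vol figures are illustrative.

def inverse_vol_weights(vols):
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [i / total for i in inv]

# e.g. equities 16% vol, treasuries 8%, gold 12% (annualised)
weights = inverse_vol_weights([0.16, 0.08, 0.12])
print([round(w, 3) for w in weights])  # treasuries get the largest weight
```

Full risk parity would also account for correlations between the assets (which is exactly where point 1, holding uncorrelated assets, earns its keep), but the inverse-vol version captures the core idea.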

    Good Luck,
    MV
