You kind of hit the nail on the head with "if" and "purpose". This is not an ESG fund (per your link); it's more about baked-in filtering that excludes certain parts of the investable universe (with additional criteria), for an SRI (Socially Responsible Investing) approach. At the end of the day, some people simply don't want to profit from or invest in certain industries, regardless of the return differential. I'd say that's a personal call, not a financial mistake.
No hedge fund background, fair cop. I'm a software developer who got frustrated with discretionary noise and built something rules-based.
On the bull run: alpha here is measured against SPY over the same period. The claim is relative outperformance, not that we picked a rising tide. The window also includes 2020 and 2022, neither of which was kind to momentum strategies. Signals that showed near-zero predictive value in statistical validation are zeroed out of the scoring.
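For anyone wondering what "zeroed out" could mean in practice, here's a minimal sketch, not the actual JSS code; the Spearman rank-correlation test and the 0.05 cutoff are my assumptions for illustration:

```python
from scipy import stats

def prune_weak_signals(signal_values, forward_returns, weights, p_cutoff=0.05):
    """Zero out weights for signals whose rank correlation with forward
    returns is statistically indistinguishable from zero. The Spearman
    test and the 0.05 cutoff are illustrative choices, not JSS's actual ones."""
    pruned = dict(weights)
    for name, values in signal_values.items():
        ic, p_value = stats.spearmanr(values, forward_returns)
        if p_value > p_cutoff:  # no evidence of predictive value: drop it
            pruned[name] = 0.0
    return pruned
```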
Paper trading is already running against live signals. The daily reports are public and free; no subscription is needed to check. The methodology is at https://jumpstartsignal.com/how-it-works/ if you want the details of what's actually being validated.
Overfitting on historical data is a real risk and definitely a concern (there have been plenty of lessons learned lately). The backtest wasn't naive, though: fundamentals used filing dates rather than period-end dates to avoid look-ahead bias, and scoring was validated out-of-sample with walk-forward testing rather than just optimised in-sample (the GA used 5 temporal folds, and the walk-forward used 25 rolling out-of-sample windows).
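To make the walk-forward part concrete, here's a rough sketch of how 25 rolling out-of-sample windows can be generated; the 3-year train / 6-month test sizes are placeholders I picked here, not the actual JSS parameters:

```python
import pandas as pd

def walk_forward_windows(dates, train_years=3, test_months=6, n_windows=25):
    """Build rolling (train, test) date ranges over a DatetimeIndex of
    trading days. Weights get fitted on each train range and scored on
    the following, unseen test range. The 3y/6m sizes are placeholders;
    only the 25-window count comes from the post."""
    step = pd.DateOffset(months=test_months)
    windows = []
    train_start = dates.min()
    for _ in range(n_windows):
        train_end = train_start + pd.DateOffset(years=train_years)
        test_end = train_end + step
        if test_end > dates.max():
            break  # ran out of history
        windows.append(((train_start, train_end), (train_end, test_end)))
        train_start = train_start + step  # roll forward by one test period
    return windows
```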
Well, it's early, so there have only been 2 actionable signals so far: 1 unique ticker SPOTLIGHT and 1 OPPORTUNITY (MONITOR is just a watchlist flag, i.e. continue to monitor). I'm putting in 1K per SPOTLIGHT/OPPORTUNITY so far. I also maintain ETF and ETC positions (and have good and bad performing stocks from before JSS). So yes, I am putting my money where my mouth is. I have another 10K ready to deploy specifically to this, plus underperforming holdings elsewhere to draw from, depending on performance.
On ESG/SRI: fair, excluding sectors comes at a cost, and we make that trade-off knowingly.
On stock picking: the system is rules-based and mechanical, not discretionary. The "folly" argument applies most strongly to human judgment calls, which this attempts to remove. I literally wanted to reduce bias and get a better vantage point.
On beating the index: 14 years of backtested data with walk-forward validation suggest it's possible for this specific strategy. Whether it holds going forward, nobody knows. We publish the ten best and worst precisely because we're not claiming certainty.
Hi, Donal from JSS here. We're on R27, revision 27 of the signal weights and features. Each revision gets snapshotted as a "golden" version in config, run through a full backtest, and the results pages pull dynamically from that snapshot, so the numbers are always anchored to a specific revision.
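To illustrate the shape of that (every field name and value below is my guess for illustration, not the real JSS config schema), a golden snapshot can be as simple as:

```python
# Hypothetical sketch of a "golden" revision snapshot; all values
# are placeholders, not the actual JSS configuration.
GOLDEN_SNAPSHOT = {
    "revision": "R27",
    "frozen_at": "2026-01-01T00:00:00Z",  # placeholder timestamp
    "signal_weights": {"momentum": 0.4, "value": 0.3, "quality": 0.3},
    "backtest_id": "backtest-r27",  # what the results pages resolve to
}
```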
The round numbering is partly for exactly the reason you named: it forces a name onto every change. When a result comes out wrong, the temptation to quietly shift a threshold is real, and having to call it R28 and re-run the full validation raises the cost of doing that on a whim.
Perhaps a changelog would close the loop though? Right now, R27 is visible in config and referenced in the metrics, but there's no page that says "R27 changed X because Y, here's what the backtest/walk-forward showed before and after." That's the missing accountability layer, and probably more useful to a skeptical reader than any amount of methodology prose.
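Sketching what I mean, an entry could be as small as this (all values below are placeholders, not real R27 data):

```python
# Hypothetical changelog entry; every value here is a placeholder.
CHANGELOG_ENTRY = {
    "revision": "R28",
    "changed": "raised the minimum-liquidity threshold",   # the X
    "because": "thin names dominated the worst-ten list",  # the Y
    "backtest_before": {"revision": "R27", "summary": "…"},  # before/after
    "backtest_after": {"revision": "R28", "summary": "…"},   # comparison
}
```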
I'm the author and am happy to take any feedback and answer any questions about this deep dive. It was fun to write and to keep learning from as I continue my 802.11 journey and build tools.
This is great feedback, thanks, and a very interesting point. Most laptops these days are constantly chatting to the Internet; just have a look in Wireshark! But the local LAN tests are preceded by IPv4 and IPv6 Internet reachability tests (as the article mentions, results are only taken from hosts that were concurrently online with dual-stack lighthouse reachability; the test suite takes about 7 seconds total and records about 25-30 data points). If we re-run the test, we could indeed let IPv6 go first, or run the probes in parallel to account for any sleeping radios.
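For the re-run idea, a parallel probe could look something like this; the lighthouse hostname and port are placeholders, and this just does a TCP connect per address family rather than the article's full test suite:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host, port, family, timeout=2.0):
    """TCP-connect reachability probe over one address family."""
    try:
        family, socktype, proto, _, sockaddr = socket.getaddrinfo(
            host, port, family, socket.SOCK_STREAM)[0]
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(timeout)
            s.connect(sockaddr)
        return True
    except OSError:
        return False

# Launch both probes at once so neither protocol gets a head start
# while the radio wakes up. "lighthouse.example.net" is a placeholder.
with ThreadPoolExecutor(max_workers=2) as pool:
    v4 = pool.submit(probe, "lighthouse.example.net", 443, socket.AF_INET)
    v6 = pool.submit(probe, "lighthouse.example.net", 443, socket.AF_INET6)
    print("IPv4 reachable:", v4.result(), "IPv6 reachable:", v6.result())
```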