Hacker News: irldexter's comments


Added a changelog as per my previous comment/response https://jumpstartsignal.com/changelog/ Thanks for the feedback, makes sense!

So, you kind of hit the nail on the head with "if" and "purpose". This is not an ESG fund (per your link); it's more about baked-in filtering/excluding certain parts of the investable universe (with additional criteria), i.e. an SRI (Socially Responsible Investing) approach. At the end of the day, some people simply don't want to profit from or invest in certain industries, regardless of the return differential. I would say that's really a personal call, not a financial mistake.

No hedge fund background, fair cop. I'm a software developer who got frustrated with discretionary noise and built something rules-based.

On the bull run, alpha here is measured against SPY, which ran the same period. The claim is relative outperformance, not that we picked a rising tide. The window also includes 2020 and 2022, not great for momentum strategies. Signals that showed near-zero predictive value in statistical validation are zeroed out of the scoring.
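"Alpha measured against SPY" in this sense just means cumulative excess return over the benchmark across the same window. A minimal illustration (the function name and inputs are mine, not JSS's; returns are per-period fractions):

```python
def relative_alpha(strategy_returns, benchmark_returns):
    """Cumulative excess return of a strategy over its benchmark
    (SPY here), computed over the same window."""
    def cumulative(rets):
        total = 1.0
        for r in rets:
            total *= 1.0 + r  # compound each period's return
        return total - 1.0
    return cumulative(strategy_returns) - cumulative(benchmark_returns)
```

Two periods of +10% against two periods of +5% gives 0.21 − 0.1025 = 0.1075, i.e. about 10.75 points of cumulative outperformance.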

Paper trading is already running against live signals. The daily reports are public and free, no subscription needed to check. Methodology is at https://jumpstartsignal.com/how-it-works/ if you want the detail on what's actually being validated.


Unless you know exactly why paper trading sims are so hard to backtest in practice, it's silly to make arguments about why your paper trading sim works.

It’s insanely easy to make a trading algo profitable on historical data.


Overfitting on historical data is a real risk and definitely a concern (there have been plenty of lessons learned lately). The backtest wasn't naive, though. Fundamentals used filing dates rather than period-end dates to avoid look-ahead bias, and the scoring was validated out-of-sample with walk-forward testing rather than just optimised in-sample (the GA used 5 temporal folds and walk-forward used 25 rolling out-of-sample windows).
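For readers unfamiliar with walk-forward testing, the splitting scheme described (rolling windows, test span strictly after its training span) can be sketched generically like this; an illustrative version, not JSS's actual code:

```python
def walk_forward_windows(n_obs: int, train_size: int, test_size: int):
    """Yield rolling (train, test) index ranges over n_obs observations.
    Each test span sits strictly after its training span, so the model
    is never scored on data it could have seen: no look-ahead."""
    start = 0
    while start + train_size + test_size <= n_obs:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        yield train, test
        start += test_size  # roll both spans forward by one test span
```

With, say, 100 periods, a 20-period training span and a 10-period test span, this produces 8 non-overlapping out-of-sample evaluations; the 25-window setup mentioned above is the same idea over a longer history.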

One last question - as a percentage of your net worth, how much of your own money have you put into the stock picks generated through this analysis?

I have a feeling I know the answer, but wanted to check.


Well, it's early, so there have only been 2 actionable signals so far: 1 unique SPOTLIGHT ticker and 1 OPPORTUNITY (MONITOR is just a watchlist flag, i.e. continue to monitor). I'm putting in 1K per SPOT/OPP so far. I also maintain ETF and ETC positions (and have good/bad performing stocks from before JSS). So yes, I am putting my money where my mouth is. I have another 10K ready to deploy specifically to this, plus other underperforming positions to draw from, based on performance.

The question was percentage of net worth :)

That will tell me how confident you are.

$10k out of $500k would say you’re toying around.

$10k of $20k seems like you believe it works.


Grok is that you?

Nope.

Those emojis on there take away from any seriousness you are trying to project and smell of AI-generated text.

Hmmm, I only used one ASCII emoticon, in a different response. An old habit from the early days. Point noted.

That's the pipeline being honest :) no forced picks on a slow day.

On ESG/SRI: fair, excluding sectors comes at a cost, and we make that trade-off knowingly.

On stock picking: the system is rules-based and mechanical, not discretionary. The "folly" argument applies most strongly to human judgment calls, which this attempts to remove. I literally wanted to reduce bias and get a better vantage point.

On beating the index: 14 years of backtested data with walk-forward validation suggest it's possible for this specific strategy. Whether it holds going forward, nobody knows. We publish the ten best and worst precisely because we're not claiming certainty.


Hi, Donal from JSS here. We're on R27, revision 27 of the signal weights and features. Each revision gets snapshotted as a "golden" version in config, run through a full backtest, and the results pages pull dynamically from that snapshot, so the numbers are always anchored to a specific revision.

The round numbering is partly for exactly the reason you named: it forces a name onto every change. When a result comes out wrong, the temptation to quietly shift a threshold is real, and having to call it R28 and re-run the full validation raises the cost of doing that on a whim.
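The "golden snapshot" idea, freezing each revision's weights so a quiet tweak is impossible without minting a new revision, can be made mechanical with a content hash. A rough sketch of the pattern (names and structure are mine, not JSS's actual config format):

```python
import hashlib
import json

def snapshot(weights: dict, revision: str) -> dict:
    """Freeze a weights config under a named revision with a content hash."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return {
        "revision": revision,
        "sha256": hashlib.sha256(blob).hexdigest(),
        "weights": weights,
    }

def verify(snap: dict) -> bool:
    """Any in-place edit to the weights breaks the hash, so changing a
    threshold forces a new snapshot (R27 -> R28) and a fresh backtest."""
    blob = json.dumps(snap["weights"], sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest() == snap["sha256"]
```

Results pages that pin themselves to a `(revision, sha256)` pair then stay anchored to exactly the weights that produced them.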

Perhaps a changelog would close the loop though? Right now, R27 is visible in config and referenced in the metrics, but there's no page that says "R27 changed X because Y, here's what the backtest/walk-forward showed before and after." That's the missing accountability layer, and probably more useful to a skeptical reader than any amount of methodology prose.


I'm the author and am happy for any feedback + to answer any questions about this deep dive. It was fun to write about and learn more as I continue my 802.11 journey + building tools.


This is great feedback, thanks, and a very interesting point. Most laptops these days are constantly chatting towards the Internet; just have a look at Wireshark! But the local LAN tests are preceded by IPv4 and IPv6 Internet reachability tests, which the article mentions: results are only taken from hosts that were concurrently online with dual-stack lighthouse reachability. The test suite takes about 7 seconds total and records about 25-30 data points. If we re-run the test we could indeed let IPv6 go first, or run both in parallel, to account for any sleeping radios.
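Running the two reachability probes concurrently (rather than v4-then-v6) is straightforward. A rough sketch using plain TCP connects, with hypothetical lighthouse hostnames and none of the article's actual tooling:

```python
import concurrent.futures
import socket
import time

def probe(host: str, port: int, family: int, timeout: float = 2.0):
    """One TCP connect attempt; returns (reachable, elapsed_ms)."""
    start = time.monotonic()
    try:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((host, port))
        ok = True
    except OSError:
        ok = False
    return ok, (time.monotonic() - start) * 1000.0

def dualstack_check(v4_host: str, v6_host: str, port: int = 443):
    """Probe the IPv4 and IPv6 lighthouses in parallel so neither
    address family pays the radio wake-up cost alone."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
        f4 = ex.submit(probe, v4_host, port, socket.AF_INET)
        f6 = ex.submit(probe, v6_host, port, socket.AF_INET6)
        return f4.result(), f6.result()
```

Comparing the two elapsed times from one parallel run would also show directly whether the first-to-wake family was eating the sleep penalty.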


Surprised no one has linked to this yet https://www.levels.fyi/?compare=Google,Facebook,Microsoft&tr... where you can view the salary grades of the FAANG companies compared to other public/private companies, including stock, as reported by staff.

