After watching real NBA front offices make questionable trades these past few years (yes, you, Nico Harrison), I figured I'd try my hand at the job. GM Mode is the result: take control of any team, propose a swap, and find out whether the league office would actually let it through. The app is deployed at fire-nico.streamlit.app.

What it does

GM Mode answers the question every NBA front office actually asks: "Can I trade Player A for Player B without the league office bouncing it?" You pick two teams, drag players from each side into the trade, and the app tells you immediately whether the deal is legal, and if it isn't, exactly which rule it broke.

It also goes one step further. A second engine takes the candidate pool from both teams and finds the best legal trade: the one that maximizes total value gained across both sides. Often the optimizer's suggestion is a totally different package from the one you proposed, and you can see exactly how much value you were leaving on the table.

Hit "Run trade" and three different solvers fire. PicoSAT checks legality in milliseconds. Google's OR-Tools CP-SAT and the open-source PuLP+CBC race to find the optimal alternative trade. They cross-check each other. If they disagree, the encoding has a bug.

Try it yourself

Running on Streamlit Cloud: fire-nico.streamlit.app

The three solvers

Trade evaluation happens in three layers, each handing its output to the next. The split exists because the rules of the NBA's Collective Bargaining Agreement break cleanly into "yes/no boolean rules" (no-trade clauses, recently-signed players, roster size limits) and "arithmetic rules" (salary matching, hard cap). Different solvers are built for each kind.
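The boolean rules compile naturally into CNF: one variable per player ("is this player in the trade?"), one clause per rule. Here is a toy sketch of that encoding; the clause shapes are illustrative, and a brute-force checker stands in for PicoSAT so the snippet is dependency-free (the app itself feeds the same kind of clause list to a real SAT solver).

```python
from itertools import product

# Toy encoding: SAT variable i+1 means "player i is in the trade".
# These specific clauses are illustrative, not the app's real CNF.
NUM_PLAYERS = 3
cnf = []
cnf.append([1, 2, 3])   # the trade must include at least one player
cnf.append([-2])        # player 1 has a no-trade clause: they stay put
cnf.append([-1, -3])    # toy roster rule: can't ship out both 0 and 2

def solve(cnf, n):
    # Brute-force stand-in for PicoSAT: try every assignment.
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            return [i for i in range(n) if bits[i]]  # players in the trade
    return None  # UNSAT: no legal trade exists under these rules

print(solve(cnf, NUM_PLAYERS))  # → [2]
```

A unit clause like `[-2]` is exactly how a no-trade clause "locks" a player: any assignment that includes them is immediately unsatisfiable.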

| Solver | Job | How it works |
| --- | --- | --- |
| PicoSAT | Boolean legality | CDCL (conflict-driven clause learning). Encodes no-trade clauses, recently-signed flags, and roster size as a CNF formula. Returns yes/no plus the set of locked players the optimizer has to leave alone. |
| OR-Tools CP-SAT | Best legal trade | Google's lazy-clause-generation engine. Branch-and-bound over the candidate pool with aggressive presolve, cutting planes, and SAT-style clause learning. Maximizes total valuation gained. |
| PuLP + CBC | Cross-check | COIN-OR's open-source MIP solver. Solves the same problem as OR-Tools, independently. If both report the same optimal value, the encoding is almost certainly correct, and the app shows a "Solvers agree" badge. |

Why player value isn't a single number

Wembanyama is worth more to a rebuilding team than to a contender. A 33-year-old veteran on a max contract is worth more to a contender than to a rebuilder. The same player, the same stats, two different valuation numbers depending on who's receiving him.

The valuation engine is a Gradient Boosted Tree: 200 small decision trees stacked, each one trained to fix the previous trees' mistakes. Inputs are nine features per player: age, salary, BPM, VORP, true-shooting percentage, positional fit (does the receiving team need this position?), rebuild score (are they contending or rebuilding?), age × rebuild interaction, and salary efficiency. Three of those nine features change with the receiving team's context. That's why the same player gets different scores in different trade scenarios.
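A minimal sketch of that context-dependent valuation, using scikit-learn's `GradientBoostingRegressor`. The synthetic target formula, feature scales, and the toy player below are assumptions standing in for the app's hand-crafted formula; the point is only that changing the receiving-team features changes the prediction for the same player.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
# Nine features: age, salary ($M), BPM, VORP, TS%, positional fit,
# rebuild score, age x rebuild interaction, salary efficiency.
age = rng.uniform(19, 38, n)
salary = rng.uniform(1, 50, n)
bpm = rng.normal(0, 3, n)
vorp = rng.normal(1, 1.5, n)
ts = rng.uniform(0.45, 0.65, n)
pos_fit = rng.integers(0, 2, n)
rebuild = rng.uniform(0, 1, n)
interaction = (38 - age) * rebuild       # youth is worth more to rebuilders
efficiency = vorp / salary
X = np.column_stack([age, salary, bpm, vorp, ts,
                     pos_fit, rebuild, interaction, efficiency])

# Hand-crafted synthetic target (an assumption, not the app's formula).
y = (2 * vorp + bpm + 5 * pos_fit + 0.5 * interaction
     + 10 * efficiency + rng.normal(0, 0.5, n))

gbt = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
gbt.fit(X, y)

def value(player, pos_fit, rebuild):
    # Same player stats, different receiving-team context.
    age, salary, bpm, vorp, ts = player
    row = [age, salary, bpm, vorp, ts, pos_fit, rebuild,
           (38 - age) * rebuild, vorp / salary]
    return gbt.predict([row])[0]

young_star = (21, 12.0, 6.0, 4.0, 0.62)  # hypothetical 21-year-old
print(value(young_star, pos_fit=1, rebuild=1.0))  # rebuilder's valuation
print(value(young_star, pos_fit=1, rebuild=0.1))  # contender's valuation
```

The last three features in `row` are the context-dependent ones: flip the rebuild score and the same stat line comes back with a different number, which is exactly the behavior described above.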

The model is trained on 2,000 synthetic examples plus the live roster. In production, it would be retrained on real trade outcomes. The architecture is set up for it. For the demo, the synthetic targets follow a hand-crafted formula so the predictions stay interpretable.

Tech stack


Built with Python, Streamlit, scikit-learn, OR-Tools, and python-sat. Jason Fang, 2026.