Terry Li

A repository on GitHub crossed forty thousand stars. The framework was a serious contender for replacing four separate tools I had assembled by hand. I ran the evaluation, scored it across three test cases ranging from easy to deliberately hostile, and arrived at the verdict any honest comparison should produce: no uplift over what I already had. The framework lost on clean pages, tied on medium-difficulty bot detection, and failed outright on the hardest target. I filed the finding, archived the test artifacts, and moved on.

That was the right move and the wrong move in the same breath. The framework verdict was correct. Closing the file was premature.

Forty thousand stars are a vote for something. People do not adopt a project at that scale unless it delivers value somewhere. My instinct said the value was integration — one library replacing four tools is a real product proposition for someone whose alternative is assembling four tools themselves. But I had already done the assembly. My alternative was not their alternative. The integration value did not transfer to me. The framework was not for me; the verdict reflected that, and the verdict was honest. And yet the value those forty thousand stars were voting on was not zero. Some of it had to live in the dependency tree.

When I went back and read the project’s pyproject.toml line by line, three primitives in the dependency list closed gaps I did not know I had. One was a TLS-fingerprinting HTTP client that impersonates browser handshakes at the network layer — sites that block our existing fetch chain on its TLS signature, a real failure mode I had been working around manually, would never block this client. Another was a fingerprint-generation library trained on real-world browser data, designed to make headless automation indistinguishable from genuine traffic. The third was a fork of Playwright that patches the webdriver flag at the binary level, which our JavaScript-based stealth patches structurally cannot do. Each of these was independently authored, independently maintained, independently installable. The framework wove them together but did not own them. I could absorb each one into the right existing tool without taking on the framework’s surface area or release cadence.
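To make the first of those concrete: the post does not name its libraries, but curl_cffi is a well-known client in that category and works as a stand-in. A minimal sketch, assuming curl_cffi is installed — an illustration of the technique, not a claim about which dependency was actually in the tree:

```python
# Illustrative only: curl_cffi stands in for the category of
# TLS-impersonating HTTP clients; it may not be the actual dependency.
from curl_cffi import requests

# impersonate="chrome" replays a real Chrome TLS handshake (cipher suites,
# extension order, the resulting JA3 fingerprint), so a site that blocks
# on TLS signature sees an ordinary browser, not a generic Python client.
resp = requests.get("https://example.com", impersonate="chrome")
print(resp.status_code)
```

The capability lives entirely at the HTTP-client layer, which is exactly why it can be absorbed into an existing fetch chain without touching anything above it.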

Two of the three landed cleanly within the same afternoon. The third revealed an architectural conflict and was reverted by evening. Net outcome — two real capability upgrades, one cautionary tale, no framework lock-in. The cautionary tale was its own dividend: it taught me that component-isolation results do not predict stack-composition results, and it left me with a discipline for catching that earlier. But the deeper point is the rejection-and-harvest move itself: the framework verdict was the start of the evaluation, not the end.

I think this generalises beyond software dependencies. When a methodology, vendor product, regulatory framework, or research program comes through the door and the answer to “should we adopt this?” is no, the natural reflex is to write the rejection memo and close the file. That reflex is the failure. The thing being rejected is almost never atomic. It has component pieces — definitions, instruments, data, primitive operations — and those components were chosen by someone smart for a reason that may apply even when the wrapper does not. Rejecting the wrapper is not rejecting the components; it is just removing the bundle that bound them together for someone else’s stack. The harvest is the part the rejection memo skips, and skipping it is how you leave gold on the table while feeling like you have done due diligence.

The discipline is small and it costs almost nothing. After a rejection verdict, list the components. For each component, ask whether the gap it closes exists in your stack. For each gap that exists, plan an integration path into your existing infrastructure. Do not adopt the wrapper, do not change your architecture to match theirs, do not sign up for their release cycle. Just take the piece that closes the gap, attribute the source, and put the wrapper back on the shelf where it belongs. The yield from this discipline is consistently better than the yield from getting the adoption decision itself right, because most adoption decisions are obvious and most primitive-harvest passes are skipped.
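Stated as code, the pass is almost embarrassingly small, which is part of the argument that it costs nothing. A sketch with hypothetical names — the real work is in answering the gap question honestly, not in the filter:

```python
# A sketch of the post-rejection harvest pass. All names are hypothetical;
# the judgment lives in filling out gap_exists_here for each component.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Component:
    name: str                         # a dependency of the rejected project
    gap_it_closes: str                # the capability or failure mode it addresses
    gap_exists_here: bool             # does that gap exist in *our* stack?
    integration_path: Optional[str]   # which existing tool would absorb it

def harvest(components: list[Component]) -> list[Component]:
    """Keep only components that close a real gap and have a home in our stack."""
    return [c for c in components if c.gap_exists_here and c.integration_path]

inventory = [
    Component("tls-client", "TLS-fingerprint blocking", True, "fetch layer"),
    Component("orchestration glue", "ties four tools together", False, None),
]

for c in harvest(inventory):
    print(f"adopt {c.name} into {c.integration_path} (closes: {c.gap_it_closes})")
```

The second entry is the wrapper’s own glue, and it filters itself out: the gap it closes is one an already-assembled stack does not have.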

A framework can be unworthy of adoption and its dependency tree can still hold gold. The two propositions are not in tension. The mistake is collapsing them into one verdict and treating the verdict as the close of the evaluation. The verdict on the wrapper closes one question. The harvest is a different question, and it is almost always the more valuable one.

· · ·
