By Michael Borella –

Among the many failings of the current U.S. patent eligibility framework under 35 U.S.C. § 101, perhaps none is more corrosive to the patent system’s basic function than the fact that the framework keeps changing. The judicial exceptions to patentability are, in theory, stable background principles. These exceptions limit the patentability of inventions that encompass laws of nature, natural phenomena, and abstract ideas.
In practice, the Federal Circuit’s application of those principles to software patents over the twelve years since Alice v. CLS Bank has resembled less a coherent legal standard than a series of freely-improvised opinions. Some of these opinions were briefly influential before being quietly diminished, and others have calcified into doctrine despite being logically indefensible. The result is a body of law that requires patentees and practitioners to track the current state of the doctrine on a monthly – if not weekly – basis.
In the immediate aftermath of Alice, the Federal Circuit went on something close to an invalidation streak. Ultramercial, Inc. v. Hulu, buySAFE, Inc. v. Google, and Planet Bingo, LLC v. VKGS LLC all came down within months of the Supreme Court’s June 2014 decision, each finding the respective claims at issue ineligible. Some in the patent community concluded, not unreasonably, that software patents as a class were effectively finished.
The first exception to this narrative was DDR Holdings, LLC v. Hotels.com, decided in December of that year, in which the Federal Circuit managed to find eligibility by characterizing claims as “necessarily rooted in computer technology” and directed at solving a problem “specifically arising in the realm of computer networks.” For a time, DDR was considered a lifeline, as it stood for the principle that an invention could survive under § 101 if it addressed a problem that was native to the internet itself, rather than merely implementing a pre-existing human practice online.
The wave built in 2016. Enfish, LLC v. Microsoft Corp. held that claims directed to improvements in computer functionality itself (and not just the use of a computer to accomplish some other task) might survive at step one of the Alice framework. The self-referential database at issue in Enfish was patent eligible because the claims focused on the computer’s own performance rather than on a result to be achieved using a computer.
Shortly thereafter, McRO, Inc. v. Bandai Namco Games America found non-abstract a method for automating lip-sync animation using a specific, defined set of rules, on the theory that the claims were directed to a concrete improvement in a technical process rather than to the general concept of animation. BASCOM Global Internet Services v. AT&T Mobility added an Alice step-two variant; notably, even if a claim is directed to an abstract idea at step one, an unconventional arrangement of known components can supply the inventive concept needed to survive step two.
For a period of roughly two years, from mid-2016 through 2017, these three cases represented what felt like a durable framework for software patent eligibility. Improve the computer, use specific unconventional rules, or arrange known components in a non-routine way, and you had a fighting chance.
The problem was that these principles were never given stable content. Improving computer functionality turned out to mean whatever a particular panel thought it meant on a given day. Claims that looked very similar to the Enfish database (e.g., specific, technically detailed, directed at improving computer performance or functionality) were found ineligible by other panels on the grounds that the claimed improvement was to the result achieved by using the computer, not to the computer’s own operation.[1] McRO’s specific rules proved similarly slippery. It was unclear how specific the rules needed to be, and how one was supposed to distinguish a specific technical rule from a specific abstract idea. By 2018 and 2019, the DDR / Enfish / McRO / BASCOM quartet was still nominally alive, but in practice courts were citing these cases mainly to distinguish them on the way to finding claims ineligible.
The next significant development was procedural rather than substantive. In Berkheimer v. HP Inc. (Fed. Cir. 2018), and Aatrix Software v. Green Shades Software, the Federal Circuit held that whether claim elements are “well-understood, routine, and conventional” under step two involves underlying questions of fact that cannot always be resolved on a motion to dismiss or summary judgment without an evidentiary record. This was genuinely significant, at least on paper. It meant that a defendant could no longer simply assert in a brief that every element of a software claim was generic, without any supporting evidence, and walk out with an early dismissal.
What happened next was predictable in retrospect. Courts narrowed the reach of Berkheimer and Aatrix in practice by holding that only specific, well-pleaded factual allegations could defeat a § 101 invalidity motion at the pleading stage. In other words, the burden to establish an evidentiary record was transferred from the challenger to the patentee. By the early 2020s, these decisions had lost much of their practical force. Courts routinely found that the factual questions supposedly raised by Berkheimer were not genuinely disputed, or that a complaint’s allegations of unconventionality were too conclusory to count.
Through the early 2020s, the Federal Circuit continued generating § 101 decisions at a volume, and with an inconsistency, that made the law difficult to summarize in any concise way. Different panels reached different conclusions on claims that practitioners regarded as substantively similar. The “improvement to computer functionality” rationale from Enfish was applied in some cases and given lip service in others, with no discernible pattern. The “internet-centric problem” reasoning from DDR shrank in practical scope, with courts increasingly skeptical that framing a problem as internet-specific was enough to confer eligibility. The BASCOM “unconventional arrangement” principle was rarely the deciding factor in eligibility determinations.
The most recent chapter involves artificial intelligence, and it is not encouraging. Recentive Analytics, Inc. v. Fox Corp. held that claims applying established machine learning (ML) methods to new data environments without any improvement to the underlying ML model are ineligible. This was billed as a case of first impression, and in one narrow sense it was. The Federal Circuit had not previously addressed ML claims in precisely these terms. But the holding simply applied the existing Alice framework’s skepticism about computer implementation to the ML context. The new wrinkle, and the dangerous one, is the implication that ML-based claims must demonstrate innovation within the model architecture itself, not merely in the application of the model to a new domain. This effectively sets a floor of technical specificity for AI patents that the prior case law had not explicitly required for other categories of software.[2]
What is not debatable is the trajectory of the last twelve years taken as a whole. In 2014, the rule was (more or less) that software is presumptively suspect, with no clear path to eligibility yet established. By 2016, the rule was that software can be eligible if it improves computer functionality, uses specific unconventional rules, or arranges known components in a new way. By 2018, the rule added a procedural gloss in that eligibility questions may involve facts that need to be developed. By the early 2020s, the rule had effectively contracted again, with the pro-eligibility rationales of 2016 being cited mainly as foils. By 2025, the rule for AI appears to be that the model itself must be novel, not just its use.
Each of these iterations was announced as an application of the same two-step Alice framework. None of them was formally repudiated when the next one arrived. They coexist in the case law as nominally live authority, cited by courts and practitioners depending on which way the argument runs.
This is not how legal doctrine is supposed to work. A test is supposed to produce consistent outcomes when applied to similar facts. The § 101 test as applied to software produces outcomes that are consistent mainly in their inconsistency. An inventor filing a patent application drawn to machine learning today cannot know whether their claims will be evaluated under the Enfish improvement-to-computer-functionality rationale, under the Recentive ML-specific rule, or under some weighted combination of these and other factors. What they can know is that the goalposts will likely have moved at least once more before any patent that grants from the application can be asserted.
[1] This distinction between improving what the computer does and improving the result the computer produces proved to be one of the more philosophically complex and practically useless dividing lines in the post-Alice doctrine. From an engineering standpoint, an improvement can be found both in a more efficient process for producing a result and in a process that produces a better result. Why one is more likely to be eligible than the other remains mysterious but likely has little to do with real-world concerns.
[2] This is another distinction that makes no sense. One of the technical challenges in the ML space is determining which ML model to apply to what data. The differences between model output given the same data can be extreme. Requiring ML inventors to innovate at the model layer rather than the use case layer is the equivalent of telling the inventors of the spreadsheet that their software was patentable only if they disclosed a new transistor.