Outlier Patent Attorneys

Software Patents: A Comprehensive Guide

The entire abstract idea jurisprudence (which affects almost all software patents) is built upon a single sentence—nine words of operative text, written in 1952, revised from language dating to 1793—that has, in the last decade, annihilated more intellectual property than any other provision in the entire United States Code. The sentence does not mention software. It does not mention computers. It does not mention artificial intelligence or machine learning or neural networks or any of the technologies that, as of this writing in early 2026, constitute roughly the entire forward thrust of the American economy. The sentence says this:

"Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."
— 35 U.S.C. § 101

That's 35 U.S.C. § 101. Read it again. It sounds generous, doesn't it? Expansive, even. The Supreme Court once said, in Diamond v. Chakrabarty, 447 U.S. 303 (1980)—a case about whether you could patent a bacterium engineered to eat crude oil, which, yes, you could—that Congress intended this language to encompass "anything under the sun that is made by man."[^1]

[^1]: The distance between "anything under the sun" and the actual, lived experience of trying to patent a software invention at the United States Patent and Trademark Office in 2025 is roughly the distance between a travel brochure's description of a Caribbean resort and the experience of arriving to discover the resort is a construction site inhabited by feral cats. The brochure is, technically, not lying. But the brochure is also not telling you anything you need to know.

What follows is a long, detailed, and—the author hopes—occasionally entertaining deep dive into the world of abstract ideas. If you are a founder or developer who wants practical, actionable guidance on how to actually get a software patent through the USPTO right now, you should read our guide to software patent eligibility—it has case studies, drafting best practices, and the kind of tactical specificity that this blog post, by virtue of its scope, cannot provide. If you are building in AI or machine learning and want the most current lay of the land—Recentive, Desjardins, inventorship, open-source entanglements, the whole mess—our 2026 guide to patenting AI technologies is, frankly, the best single resource the author has encountered. And if your question is more targeted—can I patent this specific ML technique, or is it going to get killed at Step 2A?—our breakdown of what can and cannot be patented in AI and ML will save you time and, possibly, money.

I. The Supreme Court's Three Exceptions to "Anything" (and Refusal to Explain What They Mean)

Here is the thing about "anything under the sun that is made by man" as a standard for what can be patented: the Supreme Court did not actually mean it.

Or rather—and this is a distinction that matters, though it is the kind of distinction that makes non-lawyers want to set things on fire—the Court meant it in the way that a parent means it when they tell a child "you can have anything you want for dinner" and then, when the child requests ice cream, clarifies that by "anything" they obviously didn't mean that.

The Court has identified three categories of things that, despite being encompassed by the statutory text that appears to encompass everything, are in fact not patentable. These are called "judicial exceptions," which is a polite way of saying "things the Supreme Court decided to exclude without any particular textual basis for doing so."[^2] The three exceptions are:

[^2]: This is not the author's characterization alone. Multiple Supreme Court justices, Federal Circuit judges, and approximately the entire patent bar have noted, with varying degrees of diplomatic restraint, that the judicial exceptions are not found anywhere in the text of § 101. Justice Thomas, writing for a unanimous Court in Alice Corp. v. CLS Bank, 573 U.S. 208 (2014), acknowledged this forthrightly, noting that the exceptions exist because "laws of nature, natural phenomena, and abstract ideas are the basic tools of scientific and technological work" and that monopolizing them "might tend to impede innovation more than it would tend to promote it." Whether this policy rationale justifies reading exclusions into a statute that contains none is, to put it gently, a matter of ongoing debate.

  • Laws of nature. You cannot patent E=mc². Fair enough.

  • Natural phenomena. You cannot patent a naturally occurring mineral or an unmodified human gene. Also fair enough.

  • Abstract ideas. You cannot patent an "abstract idea."

And here is where things go—and I want to choose this word carefully—insane.[^3] (For a useful primer on where the boundaries of patentable and unpatentable subject matter actually lie—particularly for AI and machine learning technologies, where the confusion is most acute—we have a practical breakdown on patent eligibility that is worth your time.)

[^3]: The author recognizes that "insane" is not a clinical term and that its casual use is potentially offensive. But having spent several hundred hours reading Federal Circuit opinions attempting to determine whether particular software claims are "directed to" abstract ideas, the author submits that no other word in the English language adequately captures the phenomenological experience of engaging with this body of law.

Because the Supreme Court has never—not once, not in Bilski v. Kappos, 561 U.S. 593 (2010), not in Mayo Collaborative Services v. Prometheus Laboratories, 566 U.S. 66 (2012), not in Alice—provided a comprehensive definition of "abstract idea." In Alice, the Court expressly declined to "delimit the precise contours of the 'abstract ideas' category." Which is a bit like a judge telling you that you've been convicted of a crime and then declining to tell you which crime, exactly, you've committed.[^4]

[^4]: What exists instead is a series of examples. Mathematical algorithms are abstract: Gottschalk v. Benson, 409 U.S. 63 (1972). Mathematical formulas applied to industrial processes are abstract: Parker v. Flook, 437 U.S. 584 (1978). Risk hedging is abstract: Bilski. Intermediated settlement is abstract: Alice. But incorporating the Arrhenius equation into a specific rubber-curing process is not abstract: Diamond v. Diehr, 450 U.S. 175 (1981). If you can identify the coherent principle that unites these outcomes, you are either a genius or delusional, and the Federal Circuit has been unable to determine which.

What we have instead of a definition is a test—a two-step analytical framework that sounds elegant in theory and is, in practice, a machine for generating confusion, inconsistency, and the quiet despair of patent practitioners nationwide.

II. The Alice/Mayo Framework (A Two-Step Dance on a Floor Made of Quicksand)

In June 2014, the Supreme Court decided Alice Corp. v. CLS Bank International, 573 U.S. 208, and everything changed. Alice Corporation held patents on a computer-implemented scheme for mitigating settlement risk in financial transactions—basically, using a computer as an intermediary to make sure both parties to a deal actually have what they claim to have before the deal closes. The Court, unanimously, said: nope. Abstract idea. Not patentable.

The opinion established what is now called the Alice/Mayo two-step framework, after the earlier Mayo decision that created the test for laws of nature:

  • Step One asks: are the claims "directed to" a judicial exception? Is this, at its core, an abstract idea?

  • Step Two asks: if yes, do the claims contain an "inventive concept"—some element or combination of elements that is "significantly more" than the abstract idea itself?

The Court then held that merely implementing an abstract idea on a generic computer—a "data processing system," a "communications controller," a "data storage unit" performing conventional functions—does not supply the requisite inventive concept. The opinion warned against interpretations that would "make patent eligibility depend simply on the draftsman's art."

The decision never uses the word "software." It never says software patents are invalid. And yet.

And yet roughly 64% of software patents challenged on § 101 grounds since Alice have been invalidated. The PTAB affirms examiner § 101 rejections approximately 88% of the time. In 2024, the Federal Circuit decided 22 patent cases on substantive § 101 grounds and found claims eligible in exactly one.

One.

Out of twenty-two.

III. In Which the USPTO Tries to Impose Order on Chaos and Partially Succeeds

To understand what happened next, you have to understand something about the relationship between the USPTO and the federal courts. The USPTO examines patent applications. The federal courts review those patents later, in litigation. And the federal courts are not bound by the USPTO's interpretation of § 101. This means the USPTO can tell its examiners one thing, and the Federal Circuit can tell the entire world something different, and both of these things can be happening simultaneously, and you—the inventor, the startup founder, the person who actually made the thing—are stuck in the middle, paying legal fees to navigate a system that cannot agree with itself about what its own rules mean.

In January 2019, under Director Andrei Iancu, the USPTO issued the 2019 Revised Patent Subject Matter Eligibility Guidance, commonly called the "2019 PEG," and it was—genuinely, substantively—a good-faith attempt to bring order to the post-Alice chaos.[^5]

[^5]: Director Iancu, to his considerable credit, publicly described the state of § 101 jurisprudence as creating "a moral crisis" in the patent system. He was not exaggerating. The pre-PEG examination landscape was one in which two examiners reviewing functionally identical claims could reach opposite conclusions because neither had any principled basis for determining what counted as an "abstract idea." The PEG was designed to fix this by replacing vibes-based analysis with an enumerated framework.

The 2019 PEG restructured the Alice/Mayo analysis into what is, on paper, a manageable decision tree. Here is how it works, and I am going to walk through it in detail because if you are a software developer or a founder or a patent attorney, this is the framework that governs your professional life:

Step 1 is easy: does the claim fall within a statutory category? Is it a process, machine, manufacture, or composition of matter? For software, the answer is almost always yes.

Step 2A, Prong One is where the PEG made its most important move. Instead of the prior approach—which was, in essence, "compare your claims to random Federal Circuit cases and see if a judge squints at them the same way"—the PEG established three enumerated groupings of abstract ideas:

(a) Mathematical concepts: mathematical relationships, formulas, equations, and calculations.

(b) Certain methods of organizing human activity: fundamental economic practices, commercial or legal interactions, and managing personal behavior or relationships.

(c) Mental processes: concepts that can be performed in the human mind, including observation, evaluation, judgment, and opinion.

And here is the critical part: if the claim does not recite limitations falling within these three groupings, the claim is eligible, full stop. The examiner is done. The analysis ends. An examiner who wants to reject a claim as directed to an abstract idea outside these three groupings must obtain Technology Center Director approval before doing so. This was a genuine constraint on the previously unbounded discretion examiners wielded.

Step 2A, Prong Two was the PEG's other major innovation. Even if a claim recites an abstract idea from the three groupings, it is still eligible if it "integrates" the exception into a "practical application." This integration inquiry looks at whether the claim applies the exception in a way that imposes a meaningful limit beyond merely drafting around it. Claims that improve computer functionality, apply the exception with a particular machine, or transform a particular article can integrate the exception. Claims that merely instruct you to "apply it" on a generic computer, or add insignificant extra-solution activity like routine data gathering, do not.

Crucially—and this is something patent attorneys still get wrong—at the Prong Two stage, it does not matter whether the additional elements are well-known or conventional. Even conventional elements can integrate an exception into a practical application. The "well-understood, routine, conventional" analysis only applies at Step 2B, which is the last resort.[^6]

[^6]: Step 2B is, practically speaking, where claims go to die. If your claim has failed both prongs of Step 2A, the examiner is now asking whether the claim contains "significantly more" than the abstract idea, and the additional elements must be more than what a skilled artisan would consider routine. Per the Berkheimer memorandum, any finding that elements are conventional must be supported by evidence—but in practice, examiners have become adept at citing the applicant's own specification or official notice to establish conventionality. If you've reached Step 2B, your application is in serious trouble.
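For readers who think better in code than in administrative guidance, the PEG's decision tree can be sketched in a few lines. This is the author's illustrative reduction, not USPTO work product; the boolean inputs stand in for legal judgments that are, in practice, the entire fight.

```python
# A minimal sketch of the 2019 PEG eligibility flowchart. The step
# names mirror the guidance; the function and parameter names are the
# author's, not official USPTO terminology.

def peg_eligibility(statutory_category: bool,
                    recites_enumerated_abstract_idea: bool,
                    integrates_practical_application: bool,
                    supplies_inventive_concept: bool) -> str:
    # Step 1: process, machine, manufacture, or composition of matter?
    if not statutory_category:
        return "ineligible (fails Step 1)"
    # Step 2A, Prong One: does the claim recite a mathematical concept,
    # a method of organizing human activity, or a mental process?
    if not recites_enumerated_abstract_idea:
        return "eligible (Step 2A, Prong One)"  # analysis ends here
    # Step 2A, Prong Two: is the exception integrated into a practical
    # application? Conventionality of the added elements is irrelevant here.
    if integrates_practical_application:
        return "eligible (Step 2A, Prong Two)"
    # Step 2B: the last resort. "Significantly more" than the exception,
    # judged against what is well-understood, routine, and conventional.
    if supplies_inventive_concept:
        return "eligible (Step 2B)"
    return "ineligible (fails Step 2B)"

# An Enfish-style claim recites no enumerated abstract idea at all:
print(peg_eligibility(True, False, False, False))
# A Desjardins-style claim recites math but integrates it:
print(peg_eligibility(True, True, True, False))
```

Note where the early exits sit: a claim that never recites an enumerated grouping never reaches the conventionality analysis at all, which is precisely the structural constraint the PEG imposed on examiners.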

The PEG worked—initially. The rate of office actions containing § 101 rejections dropped from roughly 25% in 2018 to 15% in 2020. The USPTO's own data showed a 25% decrease in first office actions with § 101 rejections for Alice-affected technologies. The system exhaled.

And then the system inhaled again.

IV. The Rejection Rates Come Roaring Back, or: Entropy Always Wins

By January 2024, § 101 rejection rates had climbed back to approximately 24%—within one percentage point of the all-time high. The gains of the PEG era had, in the aggregate, evaporated.

The spike was not evenly distributed. It was concentrated, with the precision of a guided missile, in the technology areas where it could do the most damage. Working Group 2120—AI and Simulation Modeling, within Technology Center 2100—saw 77% of office actions include a § 101 rejection in 2024. Seventy-seven percent. That is not a speed bump. That is a wall. And these are the art units examining the patents that purport to protect the very technologies—machine learning, generative AI, large language models—that the United States government has simultaneously identified as critical to national competitiveness.

The irony here is rich enough to make you weep, or laugh, or both. The United States of America, in the year 2024, was telling its patent examiners to reject three-quarters of AI patent applications as directed to abstract ideas while simultaneously telling the world that American leadership in AI is essential to national security.

The numbers tell the story with the kind of blunt clarity that Vonnegut loved and Wallace would have footnoted into infinity. Technology Center 2100 (software, AI) has a 79% overall allowance rate—sounds good until you realize the § 101 rejection rate within AI-specific art units is nearly four times higher than the center average. TC 2400 (networking) hits 90% allowance. TC 2600 (communications) hits 97%. But TC 3600—business methods, e-commerce, fintech—is the seventh circle of patent prosecution hell. Some e-commerce art units approach 100% § 101 rejection rates. TC 3600 generates over 76% of all § 101 appeals to the PTAB.

If you are a fintech founder reading this: I am sorry. I do not have good news for you. What I have is accurate news, which is a different and sometimes worse thing.

And if you are building in digital health—AI-powered diagnostics, remote patient monitoring, software-as-a-medical-device—the news is not much better, and in some ways more complicated, because you are navigating § 101 abstract-idea rejections and FDA regulatory requirements and HIPAA data obligations simultaneously, a trifecta of bureaucratic pain that no single human being should be expected to understand without help. (Outlier Patent Attorneys has a comprehensive guide to protecting digital health technologies that covers the IP, data, and regulatory dimensions together—which is how they should be covered, since in practice they are inextricable.)

V. The Cases That Drew the Map: Where Software Patents Live and Die

The post-Alice Federal Circuit case law is, in a very real sense, the actual law of software patent eligibility—the 2019 PEG governs examiners, but it is the Federal Circuit that decides what survives litigation. And the Federal Circuit's map of eligibility is drawn in a series of decisions that, taken together, reveal a narrow and treacherous path between the cliffs of abstraction and the sea of generic implementation.

The cases that said yes

Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), is the most important post-Alice decision for software patents, and if you are a patent practitioner who has not read it carefully, you should stop reading this blog post and go read Enfish instead. I'll wait.[^7]

[^7]: You're back? Good. The claims in Enfish were directed to a "self-referential" database table—a data structure that stored all data entities in a single table with column definitions provided by rows in the same table. The district court had held these claims abstract, because of course it had; by 2016, district courts were holding everything abstract. The Federal Circuit reversed, and in doing so established the principle that would become the primary survival strategy for software patents: claims directed to "an improvement in computer capabilities" rather than "economic or other tasks for which a computer is used in its ordinary capacity" are not directed to abstract ideas at all. They are eligible at Step One. The self-referential table provided faster search times, more efficient storage, smaller memory requirements, and on-the-fly configurability. These were improvements to how the computer worked, not improvements to some external business process that happened to use a computer.

The Enfish court made a point that cannot be emphasized enough: improvements to computer functionality need not be defined by reference to "physical" components. Software-only improvements qualify. This was the lifeline. This was the narrow bridge across the abyss. And the entire subsequent decade of software patent prosecution has been, in essence, an extended exercise in trying to walk across that bridge without falling off. (For a detailed walkthrough of how these cases translate into actual drafting decisions—with before-and-after claim examples showing what the USPTO accepts and rejects—see Outlier Patent Attorneys' guide to software patent eligibility, which is one of the clearest practical treatments of this material the author has encountered.)
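To make footnote 7's description concrete, here is a toy sketch of a self-referential table—the author's illustration, emphatically not the actual Enfish implementation—in which the column definitions live as ordinary rows inside the same table that holds the data. That single design choice is what enables the on-the-fly reconfigurability the court credited.

```python
# A toy self-referential table: "column" rows define the schema,
# "record" rows hold the data, all in one structure. Illustrative only.

table = [
    {"id": 1, "kind": "column", "label": "title",  "value": None},
    {"id": 2, "kind": "column", "label": "author", "value": None},
    {"id": 3, "kind": "record", "label": "title",  "value": "Slaughterhouse-Five"},
    {"id": 4, "kind": "record", "label": "author", "value": "Kurt Vonnegut"},
]

def columns(t):
    """Read the schema out of the table itself; no separate schema table."""
    return [row["label"] for row in t if row["kind"] == "column"]

def add_column(t, label):
    """Reconfigure the schema on the fly by appending an ordinary row."""
    t.append({"id": len(t) + 1, "kind": "column", "label": label, "value": None})

add_column(table, "year")
print(columns(table))  # ['title', 'author', 'year']
```

The point of the sketch is the shape of the argument, not the code: the claimed structure changed how the database worked, which is why the claims survived at Step One rather than needing rescue at Step Two.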

McRO, Inc. v. Bandai Namco Games, 837 F.3d 1299 (Fed. Cir. 2016), extended the bridge. Claims to automatically generating lip synchronization using specific rules that evaluated subsequences of phonemes were eligible because the automation employed specific, unconventional technical means that achieved results impossible through the prior manual process. This was not "doing it on a computer"—it was doing something only a computer could do, in a way that improved the technology itself.

Finjan, Inc. v. Blue Coat Systems, 879 F.3d 1299 (Fed. Cir. 2018), validated security software claims that generated a new kind of data object—a behavior-based security profile—with capabilities no prior art system possessed. And Contour IP Holding v. GoPro, 113 F.4th 1373 (Fed. Cir. 2024), the first Federal Circuit eligibility reversal in nearly three years, found claims to dual-stream POV cameras eligible as a specific means of improving relevant technology. (The Contour decision is particularly significant for AR/VR companies, where software-hardware integration claims face § 101 challenges at every turn. If you are building in spatial computing, Outlier Patent Attorneys has a comprehensive AR/VR patent strategy guide that maps the entire landscape—from who owns what, to how to get software-driven rendering and tracking claims past Alice, to the NPE litigation surge that spiked 47% in early 2025.)

The cases that said no, and meant it

Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018), was a mixed blessing. It held that whether claim elements are well-understood, routine, and conventional is a question of fact—meaning you could survive summary judgment by raising a genuine dispute. But it also confirmed that parsing, comparing, and storing data can constitute an abstract idea. District court invalidation rates under Alice dropped from approximately 69% to 46% after Berkheimer. A victory, of the modest and qualified kind.

American Axle & Manufacturing v. Neapco Holdings, 967 F.3d 1285 (Fed. Cir. 2020), was the case that broke the system. Claims to manufacturing propeller shafts by "tuning" a liner to attenuate vibrations were held ineligible as directed to Hooke's law—a natural law. The Federal Circuit denied en banc rehearing in a 6-6 split, with every single judge calling for Supreme Court clarification. Every. Single. Judge. The Solicitor General urged the Court to take the case. The Court declined, on June 30, 2022, without explanation.

Which brings us to 2025 and the decision that matters most for AI: Recentive Analytics v. Fox Corp., 134 F.4th 1205 (Fed. Cir. 2025). The court held, with devastating clarity, that claims that merely apply established machine learning methods to a new data environment—without disclosing improvements to the machine learning models themselves—are patent ineligible. Simply training a model on new data and letting it optimize iteratively is "incident to the very nature of machine learning." You haven't improved the technology; you've merely used it.[^8]

[^8]: The implications of Recentive for the AI industry are, to use a technical term, enormous. A very large number of AI patent applications currently pending at the USPTO consist of claims that recite, in various levels of specificity, the application of known machine learning techniques—random forests, gradient boosting, convolutional neural networks, transformer architectures—to new domains. After Recentive, those claims are in trouble. The claims that will survive are those that disclose how the model itself is improved: novel architectures, specific training techniques that prevent catastrophic forgetting, particular data preprocessing methods that achieve documented technical advantages. The difference between "we trained a neural network to do X" and "we modified the neural network's architecture in the following specific ways, achieving the following documented improvements" is, post-Recentive, the difference between an invalid patent and a valid one. (For a comprehensive treatment of the post-Recentive, post-Desjardins landscape and how to navigate it—including the inventorship, disclosure, and open-source considerations that compound the eligibility question—Outlier Patent Attorneys' 2026 guide to patenting AI technologies is the most thorough practitioner-oriented resource the author has found.)

VI. The 2025 Plot Twist: Director Squires and the Great Softening

Now. If you have read this far—and the author is genuinely grateful if you have, given the density of the foregoing material and the general tendency of human beings to prefer reading literally anything else—you might reasonably conclude that software patent eligibility is a wasteland from which no hope can emerge. You would be wrong. Or at least, you would be incompletely right.

Because in 2025, under Director John Squires, the USPTO did something unexpected: it started pushing back against § 101 rejections, not from the outside as a lobbying effort, but from within, as a matter of official policy and precedent.

The most significant development is Ex parte Desjardins, an Appeals Review Panel decision authored by Director Squires himself and designated precedential on November 4, 2025. The claims at issue involved training a machine learning model on multiple tasks sequentially while preserving knowledge from previously learned tasks—addressing what ML researchers call "catastrophic forgetting." The ARP found that while the claims recited mathematical concepts at Step 2A Prong One, they integrated those concepts into a practical application at Prong Two: the claimed technique reduced storage requirements, reduced system complexity, and preserved prior task knowledge in ways that represented specific, documented improvements to how the ML system actually functioned.
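For the non-ML-researcher reader: one well-known technique in the genre Desjardins involved—training methods that preserve prior-task knowledge—is an elastic-weight-consolidation-style penalty, sketched below. This is the author's illustrative example of that genre, not the claimed method from the case; the function and parameter names are invented for the sketch.

```python
# Sketch of an EWC-style regularized loss: the new task's loss plus a
# quadratic penalty holding "important" weights near the values learned
# on earlier tasks, so new training does not overwrite old knowledge.
import numpy as np

def penalized_loss(task_loss, weights, anchor_weights, importance, lam=1.0):
    """task_loss: scalar loss on the current task.
    weights / anchor_weights: current and previously-learned parameters.
    importance: per-weight estimate of how much earlier tasks rely on it.
    lam: how hard to resist forgetting."""
    penalty = np.sum(importance * (weights - anchor_weights) ** 2)
    return task_loss + lam * penalty
```

The concreteness is the point: a claim that recites a specific mechanism like this, with documented effects on storage and prior-task retention, is what the ARP found integrated the math into a practical application.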

Director Squires's opinion contained language that, in the context of § 101 jurisprudence, qualifies as revolutionary: he cautioned that categorically excluding AI innovations from patent protection "jeopardizes America's leadership in this critical emerging technology" and emphasized that §§ 102, 103, and 112—novelty, obviousness, and written description—"are the traditional and appropriate tools to limit patent protection to its proper scope." This is, in plain English, the Director of the USPTO saying: stop using § 101 as a catch-all filter for patent quality. That's not what it's for. Use the tools that are designed for quality. Use 101 only for what 101 is supposed to do.

The policy apparatus followed. The August 4, 2025 memorandum from Deputy Commissioner Charles Kim told examiners in the most § 101-affected Technology Centers (2100, 2600, and 3600) that rejections should only issue when it is "more likely than not" that the claim is ineligible—a heightened evidentiary threshold. The memo further clarified that claims merely involving mathematical concepts without explicitly reciting them remain eligible, and that AI limitations that cannot practically be performed in the human mind do not qualify as mental processes. That last point is critical: a claim that requires training a neural network with billions of parameters on terabytes of data is not, by any meaningful definition, a "mental process." The August 2025 memo made this explicit.

Then came the December 2025 SMED (Subject Matter Eligibility Declaration) guidance, which created a new evidentiary tool: applicants can now submit Rule 132 declarations providing objective evidence—including performance benchmarks, comparison data, and expert testimony—that their invention constitutes a genuine technological improvement. Think of it as a way to put facts in front of the examiner rather than relying solely on legal argument. You can now say, with evidentiary support: "Here. Our system processes queries 40% faster with 30% less memory usage. Here is the data. This is not abstract."

The PTAB reversal rates responded accordingly. From a historical average of 8-12% reversal of examiner § 101 rejections, the rate rose to 18% in October 2025 and 29% in November 2025. The PTAB also substantially curtailed sua sponte § 101 new grounds of rejection—cases where the Board raised eligibility issues the examiner never raised, which had previously appeared in roughly 9-10% of non-101 appeals.

This is, by the standards of patent administrative law, a seismic shift.

But—and there is always a "but" in this story, because this story is essentially a 30-year "but"—the Federal Circuit is not bound by any of this.

VII. Practical Strategies for Not Getting Killed, or: What You Can Actually Do

Billy Pilgrim has come unstuck in time. Your patent claims, if you're not careful, will come unstuck from eligibility. Here is how to prevent that, drawing on the case law patterns, the current examination guidance, and the accumulated scar tissue of a decade of post-Alice prosecution.[^9]

[^9]: The author notes that none of the following constitutes legal advice, which is something lawyers are required to say even when the content that follows is obviously designed to help people make legal decisions. If you are drafting or prosecuting software patents, retain a qualified patent attorney—and ideally one who actually understands startup economics and technology, rather than a big-firm generalist who will bill you $40,000 for a patent that protects the wrong thing. (Outlier Patent Attorneys has a data-driven argument for why startups should file patents—and why most startup patent experiences are terrible not because patents are bad but because the traditional law firm model is misaligned with how startups operate. It is the most candid thing a patent firm has ever published about its own industry, and it is worth reading even if you never hire them.) If you cannot afford a qualified patent attorney, you probably cannot afford to file a patent application, and you should think carefully about whether provisional applications, trade secrets, or first-mover advantage might be more appropriate intellectual property strategies for your current stage.

Frame everything around the technical improvement

This is the Enfish principle, and it is the single most important thing in software patent prosecution. Your claims must be about how the computer does its job better—not about the business result the computer achieves. The question is not "what does the system do?" but "what does the system do differently, technically, that makes the technology itself work better?"

Enfish's self-referential table. McRO's phoneme subsequence rules. Finjan's behavior-based security profile. Ex parte Desjardins's continual-learning training method. These all succeeded because they claimed the specific technical mechanism, not the desired outcome. They said: "here is a new data structure / algorithm / training technique / computational method that makes the system faster / more accurate / more efficient / more capable." They did not say: "here is a computer that does a business thing."

For AI claims specifically, after Recentive v. Fox, the line is drawn with uncommon clarity: you must disclose improvements to the ML models themselves, not just the application of conventional ML to new data. Novel architectures, specific training techniques, particular data processing methods that achieve documented improvements. "We trained a neural network to optimize TV schedules" is abstract. "We modified the training pipeline to implement a [specific technique that prevents catastrophic forgetting], achieving measurable improvements in [specific metrics]" has a fighting chance.

The specification is your lifeline—write it accordingly

Your specification must contain a dedicated section describing: (a) the technical problem in the prior art, in technical terms; (b) the specific technical solution your invention provides; and (c) measurable, documented improvements—processing speed, memory usage, accuracy, latency, storage requirements. If you cannot articulate the technical improvement, you do not have a § 101-eligible claim; you have a wish dressed up in claim language.

The Federal Circuit in Affinity Labs v. DirecTV found claims ineligible specifically because the specification failed to provide details about how the invention accomplished the alleged improvement. In Intellectual Ventures I v. Symantec, the patent owner argued technological improvement, but the court found the claims lacked any limitations addressing the described improvements. The lesson is painful but clear: the claims themselves must include the components or steps that provide the improvement. You cannot claim the result and describe the mechanism only in the specification.

The new SMED framework makes the specification even more important because it enables you to submit declarations backed by the technical evidence your specification documents. A specification that says "our system is faster" gives you nothing. A specification that says "our system processed 10,000 queries in 4.2 seconds compared to 11.7 seconds for the nearest prior art system, as documented in the following benchmark tests" gives you a SMED.
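What does "benchmark tests" mean in practice? Something as simple as the sketch below: a repeatable timing harness and a comparison expressed as the percentage a declaration would recite. The numbers and helper names here are the author's invention for illustration; the discipline of recording median timings over multiple runs is the substance.

```python
# A minimal sketch of the kind of reproducible measurement a
# specification (and later a SMED declaration) can point to.
import statistics
import time

def benchmark(fn, queries, runs=5):
    """Time fn over the query set several times; report the median
    wall-clock seconds, which resists one-off outliers."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        for q in queries:
            fn(q)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def improvement_pct(claimed_s, baseline_s):
    """Express the speedup as the percentage a declaration would recite."""
    return round(100 * (baseline_s - claimed_s) / baseline_s, 1)

# With the hypothetical figures from the text: 4.2 s vs. 11.7 s.
print(improvement_pct(4.2, 11.7), "% faster than the nearest prior-art system")
```

Run on real systems with real query sets, the output of a harness like this is exactly the objective evidence—comparison data, stated methodology—that Rule 132 declarations are built on.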

Mind the classification

This is tactical rather than doctrinal, but it matters. The USPTO's automated classification systems route applications to art units based on terminology in the title, abstract, and claims. Applications landing in Technology Center 3600 face dramatically worse outcomes than those in TC 2100 or TC 2400. If your invention has both technical and business aspects—as many software inventions do—lead with the technical framing. A title like "System and Method for Optimizing E-Commerce Revenue Through Machine Learning" will route to a very different art unit than "Machine Learning Architecture with Reduced Memory Overhead for Sequential Task Processing." Same underlying technology, perhaps. Very different prosecution experience.

The PTAB appeal calculus has changed

Historically, appealing a § 101 rejection to the PTAB was an exercise in expensive futility—88% affirmance rates do not inspire confidence. But the 2025 reversal rate shift changes the arithmetic. The doubling of reversal rates under Director Squires means appeals are now a viable strategy, particularly for AI and ML claims that can leverage the precedential Ex parte Desjardins decision.

The most effective appeal arguments: (a) the examiner characterized claims at an impermissibly high level of abstraction (cite Enfish); (b) the examiner failed to evaluate claims "as a whole" (cite Diamond v. Diehr); (c) the rejection did not meet the "more likely than not" standard from the August 2025 memo. Filing a SMED with the appeal brief provides factual grounding that the Board increasingly expects.

One procedural note: the AFCP 2.0 program expired on December 14, 2024. After-final prosecution now relies on standard after-final amendments under 37 CFR § 1.116, pre-amendment examiner interviews, and RCEs. Budget accordingly.

VIII. PERA, or: The Legislative Cavalry That May or May Not Be Coming

There is a piece of legislation making its way through Congress that would, if enacted, constitute the most significant change to patent eligibility law since the Patent Act of 1952. It is called the Patent Eligibility Restoration Act, and it has been introduced, debated, and not-quite-passed in every Congress since 2022, like a bill trapped in its own version of Groundhog Day.

The current version—S. 1546 / H.R. 3152, introduced May 1, 2025, by Senators Tillis (R-NC) and Coons (D-DE) and Representatives Kiley and Peters—would eliminate all judicially-created exceptions to patent eligibility and replace them with five narrow statutory exclusions: (1) standalone mathematical formulas; (2) processes that are substantially economic, financial, business, social, cultural, or artistic; (3) mental processes performed solely in the human mind; (4) unmodified human genes as found in the body; and (5) unmodified natural materials as found in nature. There is a critical safe harbor: any process that "cannot practically be performed without the use of a machine or manufacture" shall be eligible. The bill would also prohibit conflating eligibility with novelty, obviousness, or disclosure requirements.

The Senate Judiciary Subcommittee held a hearing on October 8, 2025. The stakeholder divide is, to use a word that does not do justice to the depth of the disagreement, deep.

On the pro-PERA side: the life sciences industry, the pharmaceutical industry, patent bar associations like the AIPLA and IPO, organizations like BIO and C4IP (led by former USPTO Directors Iancu and Kappos), and individual inventors who have watched their patents get eviscerated by Alice challenges. Their argument: the current framework is unpredictable, inconsistent, and chills investment in innovation by making it impossible to know in advance whether a given invention is patentable.

On the anti-PERA side: large technology companies represented by the High Tech Inventors Alliance, the SIIA, the Electronic Frontier Foundation, the ACLU, and a coalition of 17 IP law professors. Their argument: PERA would revive the worst software and business method patents, empower patent trolls, threaten open-source software, and potentially allow patenting of human genes. The retail industry fears revived business method patents. Generic drug manufacturers warn of expanded patent thickets that would delay generic competition.[^10] The open-source concern is particularly acute for AI startups, most of which build on open-source frameworks—PyTorch, Hugging Face models, various pretrained architectures—raising thorny questions about what you can actually patent when your foundation is open-source code, and how copyleft licenses interact with patent claims. (Outlier Patent Attorneys has a detailed analysis of the patent-and-open-source intersection that is essential reading for anyone in this position.) The entanglement is even more fraught in regulated industries: if you are building digital health products on open-source components, you are not just risking IP loss through copyleft license obligations—you are also introducing regulatory complications around software provenance, FDA compliance, and HIPAA data security that can torpedo an otherwise viable product. (Their companion piece on the hidden pitfalls of open source in digital health covers the specific ways this goes wrong, including the scenario where a copyleft-licensed component forces you to open-source your proprietary algorithm during a regulatory audit. It is the kind of thing that sounds paranoid until it happens to you.)

[^10]: The irony of the technology industry opposing PERA deserves a moment of contemplation. The same companies that hold vast patent portfolios and aggressively assert those patents against competitors are arguing that stronger patent eligibility would be bad for innovation. The cynic's explanation is that large tech companies benefit from the current § 101 regime because it allows them to invalidate competitors' patents while their own portfolios—drafted with the resources of well-funded legal departments—tend to survive. The non-cynic's explanation is that these companies genuinely believe broad patent eligibility would empower non-practicing entities and chill open-source development. Both explanations are probably partially true, which is the most unsatisfying kind of truth. What is unambiguously true is that even companies with massive patent portfolios can get the strategy wrong—filing patents that protect yesterday's business model while the actual competitive frontier moves somewhere else entirely. Outlier Patent Attorneys' case study of Spotify's patent strategy is a masterful dissection of exactly this failure mode: a company that patented its music streaming technology while its economic future pivoted to advertising, leaving the thing that actually needed a moat unprotected. If you are a founder or in-house counsel, it is the kind of cautionary tale that should inform every portfolio allocation decision you make.

As of March 2026, PERA has not passed. Senator Tillis's announced retirement creates urgency but also uncertainty about continued legislative momentum. The Senate Judiciary Committee markup produced amendments but no floor vote. Prospects are assessed at roughly 50-50 at best. The Supreme Court has denied every cert petition on § 101 in the last decade, and only Justice Kavanaugh has publicly indicated willingness to grant one.

IX. The Author's Attempt at Finding Meaning in the Morass

Here is what I know.

I know that a statute whose nine words of operative text were written in 1952 is being used to adjudicate the patentability of technologies that would have been indistinguishable from magic to the people who wrote it. I know that the Supreme Court created a test for this adjudication and then declined, repeatedly, to clarify it. I know that the Federal Circuit has applied this unclarified test to hundreds of cases and achieved results that its own judges—in opinions, in dissents, in concurrences that read like cris de coeur—have described as incoherent.

I know that the USPTO, under Director Squires, is trying to create a more navigable framework within the constraints of case law it cannot override. I know that Ex parte Desjardins and the SMED framework and the August 2025 memo represent genuine improvements at the examination stage. I know that the Federal Circuit's 95.5% invalidity rate in 2024 means those improvements may not survive post-grant challenge.

I know that the gap between what the USPTO allows and what the Federal Circuit upholds creates a kind of Schrödinger's patent—alive at the examination stage, dead in litigation, and existing in an indeterminate state of both until someone opens the box by filing a lawsuit.

And I know this: the single most important thing you can do, whether you are a patent attorney drafting claims or a startup founder deciding whether to invest in patent protection, is to understand that claim quality has never mattered more. The era of broad functional software claims is over. The era of "we do X on a computer" claims is over. What survives—what will survive both the USPTO and the Federal Circuit—are claims that identify, with technical specificity, what the invention does differently at the level of the technology itself. Not the business result. Not the user experience. The technical mechanism by which the system achieves a documented improvement over the prior art.

But before you even get to the drafting stage, the threshold question—should you patent this at all—deserves honest consideration. Patents require disclosure, they cost money, they take years to prosecute, and for some startups, trade secrets or first-mover advantage may be more strategically sound. The calculus depends on your technology, your competitive landscape, and your fundraising trajectory. (Outlier Patent Attorneys has a useful framework for weighing the advantages and disadvantages of patents for startups, including the empirical data on how patents correlate with venture funding—data that is worth examining before you commit to a prosecution strategy.) For those who want the hard numbers—and by "hard" the author means peer-reviewed, causally identified, Harvard-Business-School-grade hard—their deep dive into the empirical research on patents and startup outcomes is the most rigorous thing you will read on this subject: startups that obtained a first patent grew 55% faster in employment and 80% faster in sales, and the effect was strongest for first-time founders without prior investor backing, which is to say, the people who need the signal most. And if the question is not whether to patent but what the patent is actually worth—in a fundraise, in an M&A transaction, in litigation, as a line item on a balance sheet that a CFO has to defend to a board—their guide to patent valuations covers six valuation methodologies, the Georgia-Pacific framework, and the specific ways that § 101 eligibility risk discounts the value of software patents in every context where money changes hands. It is, in the author's admittedly non-expert opinion, the best single treatment of this question currently available.

That is the narrow path. It is real. It is navigable. It requires technical understanding, careful drafting, and the kind of attention to detail that makes the difference between a patent that holds up and a patent that doesn't. But it exists.

Whether Congress will widen that path through PERA, whether the Supreme Court will eventually take a § 101 case, whether the Federal Circuit will moderate its approach—these are open questions. They have been open questions for a decade. They may be open questions for another decade.

In the meantime, we build on the ground we have.