CAFC Litigation Intelligence Report Ranking Methodology 2024

Much like our previous ANDA, IPR, Patent Prosecution, Patent Litigation, Trademark, and ITC Intelligence reports, we have actively engaged with the CAFC community to refine and develop the most equitable methods for assessing the performance of companies, attorneys, law firms, and judges in CAFC cases. Our ranking techniques and formulas are under continuous assessment, informed by feedback, surveys, and evolutionary improvements with each successive report. In the upcoming section, we will detail the rationale and methodology behind the scoring used in this report, along with highlights of the updates compared to our previous 2023 report.

General Considerations

We’ll assume our readers have a working knowledge of the CAFC landscape. Put simply, the United States Court of Appeals for the Federal Circuit is a federal appellate court with jurisdiction over specific types of cases. It primarily handles appeals related to patents and specific civil cases originating from entities like the U.S. International Trade Commission, district courts, and the Patent Trial and Appeal Board. The final point of appeal beyond the CAFC is the United States Supreme Court. The time it takes for the Federal Circuit to issue a decision can vary, typically ranging from one month for nonprecedential decisions to three to four months for decisions that will establish precedent (with some exceptions). The primary outcomes of the appeals process are as follows:

Affirmed – Affirming the decision of the trial court, meaning that the verdict at trial stands.
Reversed – The Appellate Court decides that the lower court’s decision was wrong, and a new trial may be ordered.
Remanded – The case, or part of it, goes back to the lower court to be reheard.

The CAFC cases analyzed in this report are represented by the following outcome classifications (Table 1).

Table 1 – CAFC Outcomes and Scores Applied

Outcome | Appellant | Appellee | Appellant Atty/Firm | Appellee Atty/Firm | Judge
Affirmed | 0 | 1 | 0 | 1 | 0
In-Part Outcomes (Affirmed-in-Part, Vacated-in-Part, Reversed-in-Part, etc.) | 0.5 | 0.5 | 0.5 | 0.5 | 0.5
Vacated and/or Remanded Outcomes | 0.75 | 0.25 | 0.75 | 0.25 | 0.75
Reversed | 1 | 0 | 1 | 0 | 1
Dismissed/Withdrawn | N/A | N/A | N/A | N/A | N/A
Transferred/Consolidated | N/A | N/A | N/A | N/A | N/A

Table Notes: As illustrated in the table, we examined the various results of CAFC appeals, categorizing them by the type of opinion or order, and assigned scores to each outcome category. Of the six categories of case closures shown in Table 1, four are eligible for scoring. Where “N/A” appears in a column, it indicates that the corresponding participant was not evaluated for that outcome. Further elaboration can be found in the accompanying text.
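The scoring scheme in Table 1 amounts to a per-outcome lookup. The Python below is a minimal illustrative sketch of that lookup; the names and data structures are our own assumptions, not code from the report’s actual pipeline.

```python
# Per-outcome scores from Table 1, ordered as:
# (Appellant, Appellee, Appellant Atty/Firm, Appellee Atty/Firm, Judge)
OUTCOME_SCORES = {
    "Affirmed":                  (0.0, 1.0, 0.0, 1.0, 0.0),
    "In-Part":                   (0.5, 0.5, 0.5, 0.5, 0.5),
    "Vacated/Remanded":          (0.75, 0.25, 0.75, 0.25, 0.75),
    "Reversed":                  (1.0, 0.0, 1.0, 0.0, 1.0),
    "Dismissed/Withdrawn":       None,  # not scorable
    "Transferred/Consolidated":  None,  # not scorable
}

def score_case(outcome):
    """Return the per-participant score tuple for a case outcome,
    or None when the closure type is not eligible for scoring."""
    return OUTCOME_SCORES.get(outcome)
```

Note that the Appellant and Appellee scores for each scorable outcome sum to 1, reflecting the “partial credit” principle discussed below.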

In our analysis of CAFC cases terminated between January 1, 2019, and December 31, 2023, we encountered a subset of cases where it was challenging to determine a clear and quantifiable benefit (“win”) for either the Appellant or Appellee. In such instances, assigning a “score” or “point,” as we have traditionally done in previous reports, proved difficult. These scenarios often involved technical closures or terminations resulting from case transfers, withdrawals, or dismissals.

Through a thorough examination of individual CAFC cases and consultations with CAFC attorneys, we recognized that the outcomes of partial decisions, such as “Affirmed-in-Part, Vacated-in-Part, Remanded-in-Part,” were not easily distinguishable. Determining which side had the upper hand in the CAFC appeal was often a complex matter. In contrast, other categories, such as ‘Affirmed,’ ‘Reversed,’ and ‘Vacated/Remanded,’ were straightforward, allowing for the allocation of points.

Many cases categorized under “in-part judgment” required further scrutiny. To streamline the scoring process, we assumed that both parties contributed to the final decision by meeting somewhere in the middle. Consequently, we divided the point in half for cases of this nature. As a result, 0.5 points were allocated to both the appellant and appellee sides (as seen in Table 1) since each party played a role in shaping the case’s ultimate outcome.

Our approach to determining how each case should be “scored” was straightforward: We posed a simple question—does the case’s outcome (opinion and order) leave the Appellant in a better position than they would have been under the original ruling? If the answer was affirmative, we assigned a score of 1 point to the Appellant. Conversely, if the outcome did not favor the Appellant, we awarded 1 point to the Appellee. This methodology allowed us to fairly evaluate the results of each case and assign points accordingly.

In line with this principle, the consensus among surveyed attorneys was to count affirmed decisions as victories for the Appellee and reversed decisions as wins for the Appellant.

Additionally, we addressed the ‘Vacated and/or Remanded’ decisions, acknowledging that these outcomes can be open to varying interpretations and complexities. From a scoring perspective, based on discussions with CAFC attorneys, these were generally seen as more favorable results for the appellants, as they have the potential to bring about substantial or partial changes to the case’s outcome. While it is recognized that the ultimate winner might depend on the specific circumstances, including the portion of the decision that was vacated, we believed this approach allowed for numerical evaluation with a consistent method.

It’s important to acknowledge that any scoring system will inherently have limitations, primarily stemming from the constraints of available case information and the inability to fully capture intricate case details, including legal strategies or confidential terms. While these inherent constraints cannot be entirely eliminated, our objective in this report is to establish a scoring methodology that can be consistently and fairly applied to all parties, given the practical boundaries of accessible information, time, and resources. Therefore, we have aimed to avoid undue penalties or credits where a participant may not have had direct influence over a specific outcome, or where crucial details and circumstances impacting the outcome (and thus the score) are ambiguous or unknown. Our approach has also been designed to allow each participant the opportunity to achieve their highest possible score by providing as much “partial credit” as possible.

In summary, these scoring metrics represent the most equitable approach to ensure fairness and consistency across all participants, including companies, attorneys/firms, and judges, given the available case information. 

Activity Score

The participation in CAFC cases by companies, their representing attorneys and law firms, and judges was measured through Activity Scores, which encompassed their involvement as appellants, appellees, or a combination of both. The Activity Score, denoting the count of CAFC cases, was calculated based on the number of cases within this report’s covered dates in which a company, attorney, or firm acted as the Appellant (Appellant Cases), the Appellee (Appellee Cases), or both (Overall Cases). Judges’ Activity Scores were calculated in the same manner and included all cases over which they presided, without distinction between Appellant and Appellee cases.

All participants were assigned Activity Scores (referred to as “Cases” in the Tables) and corresponding ranks (“Activity Rank” in the Tables) based on the total number of CAFC cases they were involved in, filed between January 1, 2019, and December 31, 2023. Success and Performance scores (explained below) were determined by examining the participants’ CAFC cases filed during this same period, with decision statuses or outcomes as of September 1, 2024.

To provide a more accurate representation of an entity’s recent activity, we implemented a weighted system for cases from different years. In response to feedback from the CAFC community, which emphasized the importance of recent activity, we reduced the weight of older cases in the activity score. This adjustment allowed a participant with five cases in 2023 to potentially achieve a higher rank than another participant with an equal number of cases distributed across the entire study period.

To address potential disparities between entities with varying case volumes, we computed the activity score using a logarithmic function. This approach facilitates more straightforward comparisons between entities with differing activity levels. 
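The recency weighting and logarithmic scaling described above can be sketched as follows. The year weights and function names here are hypothetical, purely for illustration; the report does not publish its actual weights.

```python
import math

# Hypothetical recency weights per filing year (illustrative values only;
# the report's actual weights are not disclosed).
YEAR_WEIGHTS = {2019: 0.6, 2020: 0.7, 2021: 0.8, 2022: 0.9, 2023: 1.0}

def activity_score(case_years):
    """Weighted, log-scaled activity score from a list of case filing years."""
    weighted_count = sum(YEAR_WEIGHTS.get(y, 0.0) for y in case_years)
    # log1p compresses large caseloads, making entities with very different
    # case volumes easier to compare on one scale
    return math.log1p(weighted_count)
```

Under these illustrative weights, five cases filed in 2023 score higher than five cases spread evenly across 2019–2023, matching the report’s stated goal of favoring recent activity.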

Success Score 

Appellants and Appellees received scores based on various CAFC outcomes, as detailed above and presented in Table 1. The “Appellant Success” score is determined by adding up all points (representing wins or partial wins in the case of partial victories, as indicated in Table 1) for cases in which the entity acted as the Appellant. This total is then divided by the number of scorable cases and multiplied by 100 to express it as a percentage. This calculation is applied to companies, attorneys, and law firms. The “Appellee Success” score is computed and applied in the same manner, considering cases where the entity served as the Appellee. Likewise, the “Overall Success” score is calculated in the same manner, encompassing all cases where the entity acted as either the Appellant or the Appellee.
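As a minimal sketch (with hypothetical names), the Success Score calculation described above is a simple percentage:

```python
def success_score(points, scorable_cases):
    """Success Score as a percentage: total points over scorable cases."""
    if scorable_cases == 0:
        return None  # no scorable cases, so no Success Score is assigned
    return 100.0 * points / scorable_cases
```

For example, an Appellant with one reversal (1 point), one affirmance (0 points), and one vacated outcome (0.75 points) over three scorable cases would score 100 × 1.75 / 3 ≈ 58.3.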

Attorneys and law firms often have a significant influence on the outcomes of cases, as they represent their clients and actively participate in legal proceedings. Consequently, they are evaluated and ranked for Success using the same methodology as their clients (Appellees or Appellants) for all scorable outcomes. 

Judges received scores based on their involvement in cases across all scorable categories listed in Table 1, where they had a direct influence on the outcome. Following Table 1, judges were scored 0 for Appellee wins (affirmances), 1 for Appellant wins (reversals), 0.5 for in-part outcomes, and 0.75 for vacated and/or remanded outcomes. This means that the closer a judge’s average score is to 1, the more frequently they ruled in favor of the Appellant.
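A judge’s overall leaning, as described above, is the average of these per-case scores. The sketch below is illustrative only, with our own assumed names; unscorable cases (represented here as None) are excluded.

```python
# Per-case judge scores per Table 1: 0 = affirmed (Appellee win),
# 1 = reversed (Appellant win), 0.5 = in-part, 0.75 = vacated/remanded,
# None = not scorable (dismissed, withdrawn, transferred, consolidated).
def judge_leaning(case_scores):
    """Average judge score; values closer to 1 indicate more
    appellant-favorable rulings."""
    scorable = [s for s in case_scores if s is not None]
    return sum(scorable) / len(scorable) if scorable else None
```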

In our effort to be inclusive and account for participants who may have been involved in only a limited number of cases, we calculated the Success Score for all attorneys, law firms, or companies with at least one terminated and scorable case, as indicated in Table 1.

Normalization Using Machine Learning 

Assessing individual success in the realm of CAFC appeals is a multifaceted undertaking. Numerous complex variables come into play, including the legal arguments presented, judge decisions, specific patents and technologies, and the legal acumen of the involved attorneys. Evaluating the success of entities—whether they are attorneys, law firms, or companies—in a single case can be exceptionally challenging. The interconnected nature of these factors makes it difficult to isolate an individual’s impact on a case’s outcome. However, by harnessing extensive datasets from various CAFC cases and employing advanced computational techniques and machine learning algorithms, we conduct a rigorous mathematical analysis of individual performance across diverse scenarios.

Our approach to evaluating Success Scores for CAFC participants is rooted in fairness and precision. To reduce the influence of external factors, such as the success of other entities or the discretion of CAFC judges, we’ve developed a robust analytical model based on cutting-edge machine learning methodologies. This model effectively isolates each entity’s performance—whether it’s a company, attorney, or law firm—from external effects that could bias the analysis. As a result, we offer a more refined and impartial assessment of individual success within the complex CAFC landscape.

In practice, we employ advanced analytical techniques, making intricate adjustments to recalibrate the scores of all involved parties and their representatives, accounting for the various factors that can affect the outcome of CAFC cases. After obtaining these adjusted success scores for each participant, we calculate the average success score for each party or their representatives by considering their performance across all the CAFC cases in which they were involved. This advanced approach leads to a more precise and equitable evaluation of individual success within the intricate CAFC landscape, providing a comprehensive understanding of their performance.

Performance Score

As with our other IP Insight reports, the Performance Score serves as a weighted average of Activity and Success Scores. This additional scoring metric acknowledges what many clients seek in an attorney: a combination of substantial experience, reflected in high activity or caseload for relevant cases, and a strong track record of success in those cases. Exceptionally high case counts tend to pull Success Scores toward the mean over time, which makes direct success comparisons between firms and attorneys with significantly different caseloads less than ideal. With the introduction of the “Performance Score,” we can now evaluate and rank companies, attorneys, and law firms based on the combined factors of Activity and Success.
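The weighted average described above can be sketched as below. The report does not disclose its relative weights, so the values here are assumptions chosen only to illustrate the combination.

```python
# w_activity and w_success are hypothetical weights (the report's actual
# weights are not published); they should sum to 1 for a true weighted average.
def performance_score(activity, success, w_activity=0.4, w_success=0.6):
    """Combine normalized Activity and Success scores into one metric."""
    return w_activity * activity + w_success * success
```

Weighting Success above Activity, as in this sketch, keeps a high-volume but low-success participant from outranking a smaller firm with a stronger track record.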

To conclude the methodology description, it’s important to emphasize that none of these scoring metrics should be interpreted as reflective of the quality, experience, ability, or impartiality of any law firm, attorney, company, or judge. Just as even the most skilled doctors may face unfavorable outcomes, attorneys can also encounter cases they do not win, often due to factors beyond their control. Additionally, there are instances where attorneys must accept cases with a higher likelihood of loss as part of their professional responsibilities. Therefore, we strongly recommend that clients seeking a law firm or attorney conduct their own due diligence, analysis, and interviews when making their selection. The choice should be based on which firm or attorney they believe is best suited to address the unique requirements and needs of each individual case.
