Patent Litigation Intelligence Report Ranking Methodology 2024

In line with our prior reports covering CAFC, PTAB Litigation, Patent Prosecution, Trademark Prosecution, International Trade Commission Section 337 Investigations, and Abbreviated New Drug Applications Intelligence, we engaged with stakeholders and our IP community to identify the fairest and most efficient methods for evaluating the activity and performance of companies, attorneys, law firms, and judges involved in district court patent litigation. We continually assess and refine our ranking methodologies and formulas based on growing experience and feedback from the legal community. Consequently, this latest report incorporates several enhancements in scoring and ranking, detailed below.

General Considerations 

Before delving into the specifics of our ranking methodology, we assume readers possess a foundational understanding of the patent litigation landscape. Questions that often arise in patent litigation strategy involve decisions about U.S. District Court filing locations, venue selection, responses to defendant challenges before the Patent Trial and Appeal Board (PTAB), and whether to use International Trade Commission Section 337 investigations in conjunction with district court filings (e.g., see Patexia Insight 44). Our comprehensive IP Insight Intelligence Reports have covered various forms of patent litigation, including PTAB proceedings, ITC Section 337 Investigations, ANDA litigation, and subsequent appeals to the CAFC. For the scope of this report, however, our focus remains exclusively on U.S. District Court patent litigation, where a case is typically initiated by a plaintiff seeking injunctive relief or damages from a defendant alleged to manufacture, import, sell, or use products that infringe the plaintiff's patents.

Refining and Scoring PACER Status Classifications

Patent litigation cases terminated during the period of our study are classified using the official PACER (Public Access to Court Electronic Records) statuses. These statuses provide some insight into the outcomes of the patent cases examined within this report (refer to Table 7.1) and form the foundational framework for comprehending and interpreting the diverse conclusions and outcomes of these litigations.

Table 7.1 – PACER Outcome Classifications for patent cases

Listed PACER Outcome
Judgment – Judgment on Consent
Judgment – Motion Before Trial
Judgment – Court Trial
Judgment – Judgment on Default
Judgment – Other
Dismissed – Voluntarily
Dismissed – Settled
Dismissed – Other
Dismissed – Lack of Jurisdiction
Dismissed – Want of Prosecution
Transfer/Remand – MDL Transfer
Statistical Closing
Non-reportable Closing

Despite the diverse PACER statuses attached to patent cases filed between July 1, 2018, and June 30, 2023, several of these PACER outcome classifications don’t accurately reflect the actual case outcomes or which party emerged victorious after case termination. For these reasons, we needed to create a comprehensive scoring or point system that could be equitably and uniformly applied to all parties involved by:

  • Manually reviewing case judgment decisions encompassing various PACER categories.
  • Reassessing, reclassifying, and segmenting official PACER outcomes into a more practical classification and point system. This system aims to more precisely represent the practical outcomes of each case for the involved parties, their legal representatives, and judges.

The scoring system resulting from these efforts, constructed with input from our legal community, is outlined in Table 7.2 and accompanied by an explanatory section. These insights from our network of attorneys offer valuable perspectives on the practicalities of litigating patent cases, and we extend our sincere gratitude for their time and input.

Table 7.2 – Patent Case Outcomes and Scores Applied

Outcome | Plaintiff | Defendant | Plain. Atty/Firm | Def. Atty/Firm | Judge
Judgment – Defendant Wins | 0 | 1 | 0 | 1 | 0
Judgment – Plaintiff Wins | 1 | 0 | 1 | 0 | 1
Judgment – Settlement (Confidential) | 0.5 | 0.5 | 0.5 | 0.5 | –
Judgment – Partial Win for Both | 0.5 | 0.5 | 0.5 | 0.5 | 0.5
Judgment – Voluntarily Dismissed by Party(ies) | 0.25 | 0.75 | 0.25 | 0.75 | –
Dismissed – Voluntarily | 0.25 | 0.75 | 0.25 | 0.75 | –
Dismissed – Settled (no IPR petition) | 0.5 | 0.5 | 0.5 | 0.5 | –
Dismissed – Settled (IPR denied) | 0.75 | 0.25 | 0.75 | 0.25 | –
Dismissed – Settled (IPR, settled pre-Institution) | 0.25 | 0.75 | 0.25 | 0.75 | –
Dismissed – Settled (IPR, settled post-Institution, pre-trial) | 0.25 | 0.75 | 0.25 | 0.75 | –
Dismissed – Settled (IPR, < 50% claims survive) | 0.25 | 0.75 | 0.25 | 0.75 | –
Dismissed – Settled (IPR, > 50% claims survive) | 0.75 | 0.25 | 0.75 | 0.25 | –
Dismissed – Other | 0.25 | 0.75 | 0.25 | 0.75 | –
Dismissed – Lack of Jurisdiction | 0 | 1 | 0 | 1 | –
Dismissed – Want of Prosecution | 0 | 1 | 0 | 1 | –
Transfer/Remand – MDL Transfer | – | – | – | – | –
Statistical Closing | – | – | – | – | –
Non-reportable Closing | – | – | – | – | –

(“–” indicates that no score was assigned; the last three categories are administrative closures.)

All five PACER “Judgment” categories (highlighted in yellow in Table 7.1) required manual review because PACER nomenclature alone does not consistently indicate the outcome for the Plaintiff or Defendant. We therefore manually reviewed the documents for 1,277 cases within the five PACER Judgment categories that were filed between July 1, 2018, and June 30, 2023, and had a decision status available in PACER as of December 1, 2023 (the time of review).

The rest of the original PACER classifications didn’t require a manual review since:

  1. We were able to score cases fairly and consistently in these other categories
  2. The settlement terms often remain confidential, so even manual review often does not uncover additional information 
  3. The vast volume of cases terminated in these other categories during this time period made it impractical to manually review each one

Based on our manual review, we subdivided the PACER Judgment cases into five outcome classifications reflecting tangible outcomes (highlighted in yellow in Table 7.2). Cases in any of the five original PACER Judgment classifications (highlighted in yellow in Table 7.1) were reassigned to these new Judgment outcomes. These categories represent decisions or outcomes providing measurable benefits, a “win” for either the Plaintiff or Defendant. For these categories, we assigned a “score” or “point” based on our examination of case documents and subsequent evaluation, incorporating insights from our legal community experts, regarding the relative success and engagement of the litigants (companies) and other participants in the case (attorneys, firms, judges).
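For readers who wish to apply Table 7.2 programmatically, the Plaintiff/Defendant point splits can be transcribed as a simple lookup. The sketch below (in Python, with labels shortened to ASCII hyphens) is a direct transcription of the table; attorney and firm scores mirror their clients' scores, and judges are scored only on adjudicated outcomes.

```python
# Plaintiff/Defendant point splits transcribed from Table 7.2.
# Attorney/firm scores mirror their clients' scores; judges are scored
# only on adjudicated outcomes (1 for a Plaintiff win, 0 for a Defendant
# win, 0.5 for a partial win).
OUTCOME_SCORES = {
    "Judgment - Defendant Wins":                                      (0.0, 1.0),
    "Judgment - Plaintiff Wins":                                      (1.0, 0.0),
    "Judgment - Settlement (Confidential)":                           (0.5, 0.5),
    "Judgment - Partial Win for Both":                                (0.5, 0.5),
    "Judgment - Voluntarily Dismissed by Party(ies)":                 (0.25, 0.75),
    "Dismissed - Voluntarily":                                        (0.25, 0.75),
    "Dismissed - Settled (no IPR petition)":                          (0.5, 0.5),
    "Dismissed - Settled (IPR denied)":                               (0.75, 0.25),
    "Dismissed - Settled (IPR, settled pre-Institution)":             (0.25, 0.75),
    "Dismissed - Settled (IPR, settled post-Institution, pre-trial)": (0.25, 0.75),
    "Dismissed - Settled (IPR, < 50% claims survive)":                (0.25, 0.75),
    "Dismissed - Settled (IPR, > 50% claims survive)":                (0.75, 0.25),
    "Dismissed - Other":                                              (0.25, 0.75),
    "Dismissed - Lack of Jurisdiction":                               (0.0, 1.0),
    "Dismissed - Want of Prosecution":                                (0.0, 1.0),
    # Not scored (administrative closures): "Transfer/Remand - MDL
    # Transfer", "Statistical Closing", "Non-reportable Closing".
}
```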

Judgment – Plaintiff or Defendant Wins: 

In any adjudicated case with a clearly stated judgment or decision favoring either the Plaintiff (such as infringement or injunction) or the Defendant (like non-infringement or a successful motion to dismiss), the Plaintiff received 1 point (Judgment – Plaintiff Wins) while their opponent received 0 points. The same applied vice versa: the Defendant received 1 point (Judgment – Defendant Wins) with the Plaintiff receiving 0 points. For these cases, the attorneys or law firms representing each party received identical scores since they contributed significantly to the case outcomes. Similarly, the participating judges were also scored based on decisions for the Plaintiff or Defendant, receiving 1 or 0 points, respectively. 

Judgment – Settlement (Confidential): 

Cases terminated explicitly through a confidential settlement agreement received a score of 0.5/0.5 for all parties involved (Judgment – Settlement (Confidential)). Because the terms of confidential settlements remain undisclosed, it is impossible to discern the prevailing party, so an equal score was assigned to both litigants (judges were not scored).

Judgment – Partial Win for Both: 

In numerous cases, multiple counts were presented in the complaints. While the majority of judgments favored one side across all counts, there were instances of mixed judgments. These cases saw the defendant prevailing in certain counts while the plaintiff succeeded in others. In instances where there was no unequivocal winner, we applied a neutral scoring of 0.5/0.5 for all parties involved. 

Judgment – Voluntarily Dismissed by Party(ies): 

Cases within the PACER Judgment category that concluded due to a voluntary motion to dismiss initiated by the plaintiff were assessed with a 0.25/0.75 score for the Plaintiffs/Defendants and their counsel (judges were not scored for this category) (Judgment – Voluntarily Dismissed by Party(ies)). This scoring rationale stems from the presumption that, in most instances, the Plaintiff derives some tangible yet typically undisclosed advantage from the dismissal. These cases were therefore not scored as harshly for the Plaintiffs as a loss on the merits, which would have instead received a 0/1 score.

Dismissed – Voluntarily: 

This dismissal status carries over directly from the original PACER classification, which we retained as one of our newly reclassified categories (Table 7.2). To establish the scoring criteria for this category, we consulted our community of patent attorneys. These cases typically signify undisclosed settlements or agreements in which the plaintiff has likely secured some form of benefit. A majority of attorneys opined that Plaintiffs/Defendants should be scored 0.25/0.75, respectively, for this category, as indicated in Table 7.2.

Dismissed – Other:

The other scorable category that transitioned directly from the PACER classifications to our scoring system was labeled “Dismissed – Other.” Similar to “Dismissed – Voluntarily” mentioned earlier, we assigned a score of 0.25/0.75 for the Plaintiffs/Defendants and their legal representation (highlighted in red in Tables 7.1 and 7.2). These cases were usually dismissed after the plaintiff party filed a motion, but weren’t classified by PACER as voluntary dismissals. 

Three other categories were transferred directly from PACER but were not scorable, since they represent administrative closures or transfers (green in Tables 7.1 and 7.2):

  • Transfer/Remand – MDL Transfer
  • Statistical Closing
  • Non-reportable closing

Dismissed – Settled:

The remaining original PACER category yet to be addressed is “Dismissed – Settled” (highlighted in gray in Table 7.1). Through discussions and surveys involving expert attorneys in our legal community, these cases were further classified and scored according to their connection with related inter partes review (IPR) proceedings, as depicted in Table 7.2 (highlighted in gray).

As previously detailed in Patexia Insight 44, over 80% of IPR filings are made for defensive purposes, initiated directly by the defendant or indirectly by other entities aiming to invalidate the asserted patents (e.g., Unified Patents or RPX Corporation). Consequently, the outcomes of these IPR proceedings can significantly influence negotiations between the involved parties and, ultimately, the terms of settlement. Hence, we assigned scores to all “Dismissed – Settled” cases based on the outcome of the corresponding IPR proceedings, as outlined below:

Dismissed – Settled (no IPR petition):

In cases where none of the patents involved were addressed by an IPR petition, we evenly divided the points between the two parties (0.5/0.5). 

Dismissed – Settled (IPR denied):

When an IPR petition is filed but institution is denied, it typically signifies that the panel of administrative judges at the PTAB deemed the challenged claims likely to survive scrutiny over the asserted prior art. Settlements in such cases may therefore imply advantageous terms for the plaintiff, given the patent's survival of the IPR challenge. In this scenario, we allocated points with a 0.75/0.25 split favoring the plaintiff.

Dismissed – Settled (before the institution of IPR):

If a case settles before the institution decision, the implications vary. It might indicate that the patent holder (plaintiff) perceived a limited chance of success and aimed to minimize losses, but it could also signify early negotiations that led to a royalty agreement with the defendant. Without concrete evidence, definitive conclusions are difficult to draw. Our initial inclination was to treat these cases like other settlement agreements and simply split the point (0.5/0.5) between the plaintiff and defendant. However, survey respondents preferred a 0.25/0.75 split in favor of the defendant, and that became the scoring criterion adopted for this category.

Dismissed – Settled (after the institution of IPR):

If a case settles after the IPR institution phase, it commonly signifies strong prior art, suggesting that the patent owner and their legal representation may expect an unfavorable outcome if they proceed with the IPR. In such scenarios, the settlement of the patent case might suggest more advantageous terms for the defendant. Consequently, we opted to allocate points with a split of (0.25/0.75) in favor of the defendant. 

Dismissed – Settled (less than 50% of the claims challenged in the IPR survived)

If the majority of claims challenged in an IPR are invalidated (i.e., less than 50% survive), we presume that the settlement terms likely leaned in favor of the defendant. Consequently, we distribute the points 0.25/0.75 for Plaintiff/Defendant.

Dismissed – Settled (more than 50% of the claims challenged in the IPR survived)

Once an IPR reaches its conclusion and a final written decision issues, we assess all the challenged claims and determine the percentage that survived. If over 50% of the challenged claims survive, indicating a favorable outcome for the Plaintiff, we distribute the points 0.75/0.25 for Plaintiff/Defendant. This allocation reflects the likelihood that the patents remain valid and enforceable for the Plaintiff's benefit.
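The claim-survival test above reduces to a small calculation. The helper below is an illustrative sketch; the function name, and the defendant-favoring treatment of an exact 50% tie (which Table 7.2 leaves unspecified), are our assumptions.

```python
def settled_ipr_split(challenged_claims: int, surviving_claims: int) -> tuple[float, float]:
    """Plaintiff/Defendant split for a 'Dismissed - Settled' case whose
    related IPR reached a final written decision (per Table 7.2)."""
    survival_rate = surviving_claims / challenged_claims
    # Table 7.2 favors the Plaintiff when more than 50% of challenged
    # claims survive; an exact 50% tie is unspecified in the table and is
    # folded into the defendant-favoring branch here as an assumption.
    return (0.75, 0.25) if survival_rate > 0.5 else (0.25, 0.75)
```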

Other Considerations

Most patent cases involve a single plaintiff and a single defendant, so outcomes are straightforward to score. Cases involving multiple parties generally produce uniform outcomes among parties on the same side, and occasionally such cases are divided administratively into individual cases, allowing us to score each party separately.

While any scoring system may have inherent limitations due to the constraints of available case details and the inability to capture all case intricacies, our objective in all our Intelligence Reports, including this one, is to create a scoring methodology that ensures fairness and consistency across all involved parties. We aim to avoid unfairly penalizing or crediting participants when their direct influence on a specific outcome might be ambiguous or unknown. Our goal is to provide ample acknowledgment for each participant’s role, allowing them the opportunity to achieve their highest possible score.

In summary, after comprehensive assessments and feedback from our community experts gathered through surveys and discussions, the scoring metrics outlined in Table 7.2 were collectively agreed upon as the most suitable and fair approach. These metrics ensure consistent application across all participants, including companies, attorneys/firms, and judges, considering the available case information. Once the points were allocated as detailed in Table 7.2, we could establish Activity, Success, and Performance Scores and Rankings for each participant.

Automated Case Status Detection 

In numerous instances, PACER marks a case with a termination date, indicating closure, but lacks the corresponding status value indicating the outcome. To address this problem, we developed an automated system that detects case statuses by analyzing documents filed around the termination date. Leveraging Natural Language Processing (NLP) and various analytical techniques, the system examines the contextual information within multiple documents to identify the case status. By filling in missing statuses and rectifying cases where the displayed status contradicts the filed documents, this automated solution significantly improves the precision of termination data drawn from PACER and provides a more reliable representation of case statuses and outcomes.
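The production system uses NLP models whose details are not published here. Purely as an illustration of the idea, a minimal rule-based fallback might scan docket documents near the termination date for outcome cues; the phrases and status mapping below are assumptions chosen for illustration, not the actual model.

```python
# Illustrative only: a minimal rule-based fallback for inferring a missing
# PACER status from documents filed near the termination date. The cues and
# the mapping to statuses are assumptions, not the production NLP model.
STATUS_CUES = [
    ("stipulation of dismissal", "Dismissed - Voluntarily"),
    ("settlement agreement",     "Dismissed - Settled"),
    ("consent judgment",         "Judgment - Judgment on Consent"),
    ("default judgment",         "Judgment - Judgment on Default"),
    ("summary judgment",         "Judgment - Motion Before Trial"),
]

def infer_status(doc_texts: list[str]) -> str | None:
    """Scan the text of documents filed around the termination date."""
    for text in doc_texts:
        lowered = text.lower()
        for cue, status in STATUS_CUES:
            if cue in lowered:
                return status
    return None  # unresolved; flag for manual review
```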

Activity Score 

In our observations, we’ve noted instances where a single plaintiff with a limited number of patents has filed lawsuits against hundreds of companies. Interestingly, while having more defendants typically increases the workload for the plaintiff, this increased workload doesn’t scale proportionally with the number of cases filed using the same patents. Conversely, the number of patents involved directly correlates with the workload managed by each party or their representatives. Therefore, an entity’s level of activity during a specific period is contingent on two key factors: the count of unique cases and unique patents. Consequently, our Activity Score is calculated from both the total unique cases and unique patents filed within the duration of this report, in which the company, attorney, or firm acted as a Plaintiff, Defendant, or a combination of both. This score is computed as a weighted average of cases and patents involved. This approach, widely endorsed by our expert community, is considered the fairest method to assess an entity’s activity.

Moreover, we applied a higher weight to recent cases, employing a weighted scale (for example, 1.0 for 2023 cases, 0.98 for 2022, 0.96 for 2021, 0.94 for 2020, and so on). This adjustment accommodates scenarios where some firms may have been more active in past years than at present, ensuring an equitable ranking system that reflects current activity levels.

We also calculate the Activity Score on a logarithmic scale rather than a linear one, to address large disparities in scores between consecutive firms or attorneys. A linear scale assigns scores in direct proportion to the value being measured: a firm with twice the number of cases as another would receive twice the score. Because the number of cases filed varies greatly among firms, a linear scale produces large gaps between them. A logarithmic scale instead assigns scores based on the logarithm of the value measured, so the difference in scores shrinks as the number of cases grows. This minimizes the large gaps and provides a fairer, more accurate representation of each firm's or attorney's activity level in the patent litigation landscape.
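Putting the two adjustments together, a minimal sketch of an Activity Score might look like the following. The 50/50 blend of cases and patents, the per-case data layout, and the use of log1p are illustrative assumptions; the report does not publish its exact weights or logarithm base.

```python
import math

def recency_weight(year: int, current_year: int = 2023) -> float:
    """Recency weights from the text: 1.0 for 2023, stepping down 0.02/year."""
    return max(0.0, 1.0 - 0.02 * (current_year - year))

def activity_score(cases: list[dict], case_weight: float = 0.5) -> float:
    """Blend of recency-weighted unique cases and unique patents on a log
    scale. Each case dict is assumed to carry a filing 'year' and a list
    of 'patents'; the 50/50 blend and log1p are illustrative assumptions."""
    weighted_cases = sum(recency_weight(c["year"]) for c in cases)
    unique_patents = {p for c in cases for p in c["patents"]}
    # log1p compresses large caseloads so gaps between consecutive
    # entities shrink as volume grows, as described above.
    case_term = math.log1p(weighted_cases)
    patent_term = math.log1p(len(unique_patents))
    return case_weight * case_term + (1 - case_weight) * patent_term
```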

Judges’ Activity Score is calculated the same way, except that it includes all cases over which they presided and is not separable by the number of Plaintiff or Defendant cases.

All participants received activity scores (“Activity Score” in the tables) and ranks (“Activity Rank” in the tables) based on the above model applied to the period from July 01, 2018, to June 30, 2023.

Success Score

Plaintiffs and defendants were also scored for the various case outcomes, as described above and shown in Table 7.2.

The “Plaintiff Success” and “Defendant Success” scores are calculated by aggregating the total points earned in cases where entities acted as either Plaintiff or Defendant, respectively. These scores are then divided by the total number of scorable cases and expressed as a percentage (multiplied by 100). Similarly, the “Overall Success” score considers all cases collectively as Plaintiff or Defendant. Rankings (“Success Rank” in the Tables) were assigned based on these scores.
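As a minimal sketch, the per-role calculation reduces to an average of the points earned across scorable cases, expressed as a percentage (the function name is ours):

```python
def success_score(points: list[float]) -> float:
    """Total points earned across scorable cases (per Table 7.2), divided
    by the number of scorable cases and expressed as a percentage."""
    if not points:
        return 0.0
    return 100.0 * sum(points) / len(points)
```

For example, a plaintiff-side record of 1.0, 0.5, and 0.25 points over three scorable cases yields a Plaintiff Success score of about 58.3.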

To enhance the accuracy of success rate calculations, we adopted a method that accounts for the likelihood of winning all cases rather than merely averaging success rates. This approach recognizes the higher difficulty faced by entities handling a larger volume of cases in achieving success across the board. Evaluating the total activity level of an entity allows us to not only consider the number of won cases but also gauge the probability of winning all cases. This refined calculation method provides a more nuanced understanding of an entity’s performance in the patent litigation landscape, particularly for entities handling a high volume of cases. By factoring in both success rates and the entity’s overall activity level, we present a fairer and more accurate assessment of firms and attorneys in the patent litigation domain.
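The report does not publish the exact volume adjustment. One standard technique in the same spirit, shown here purely as an illustrative stand-in, is a confidence lower bound on the win rate (the Wilson score interval), which rewards entities that sustain a high rate over many cases rather than the same rate over a few:

```python
import math

def volume_adjusted_success(points_total: float, n_cases: int, z: float = 1.96) -> float:
    """Illustrative stand-in for the volume adjustment: the lower bound of
    a Wilson score interval on the win rate. This is not the report's
    published formula; it simply demonstrates how sustaining a high rate
    over many cases can be rewarded relative to the same rate over few."""
    if n_cases == 0:
        return 0.0
    p = points_total / n_cases
    denom = 1 + z ** 2 / n_cases
    center = p + z ** 2 / (2 * n_cases)
    margin = z * math.sqrt((p * (1 - p) + z ** 2 / (4 * n_cases)) / n_cases)
    return 100.0 * (center - margin) / denom
```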

Attorneys and firms were evaluated for Success in alignment with their client’s role (Plaintiff or Defendant). Their scores and rankings were determined based on scorable outcomes where they played a substantial role in influencing the case’s outcome (refer to Table 7.2).

For judges, scores were allocated based on adjudicated case outcomes in which they directly participated (as indicated in Table 7.2). Judges received a score of 0 for Defendant wins and 1 for Plaintiff wins. Consequently, a judge’s score closer to 1 implies a higher frequency of rulings favoring the Plaintiff.

Performance Score

Consistent with our recent Intelligence Reports, we have also integrated a Performance Score, which serves as a weighted average of the Activity Score and Success Score (i.e., the “win” percentage). It’s vital to acknowledge that sustained high activity can potentially temper performance over time, adhering to the law of averages. Consequently, comparing the success scores of entities with vastly different workloads might not yield an accurate evaluation. By considering these dynamics, the “Performance Score” allows for a comprehensive assessment, factoring in both activity and success to rank companies, attorneys, and law firms effectively. This additional metric proves invaluable in identifying attorneys and law firms that exhibit not only a considerable level of activity in handling patent cases but also boast a high success rate in litigation. Essentially, it discerns entities that are both experienced and successful in this legal domain.
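A minimal sketch of the combination, assuming equal weights (the report describes a weighted average without publishing the weights):

```python
def performance_score(activity: float, success: float, w_activity: float = 0.5) -> float:
    """Weighted average of the Activity and Success Scores. The equal
    weighting is an assumption; the report does not publish the weights."""
    return w_activity * activity + (1 - w_activity) * success
```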

Normalization Using Machine Learning

Assessing success in the domain of patent litigation is a complex endeavor. It involves navigating variables such as legal arguments, judges' decisions, the specific patents, technology nuances, and the legal prowess of the involved attorneys, firms, and companies. Evaluating the success of an individual entity (an attorney, law firm, or company) within a single case can be exceedingly challenging due to the interconnected nature of these factors. To address this complexity, we leverage vast datasets from diverse patent litigation databases and deploy sophisticated computational techniques and machine learning algorithms for a rigorous and comprehensive analysis of individual performance.

Our method for evaluating Success Scores for patent litigation stakeholders prioritizes fairness and precision. To mitigate the impact of external influences, such as the success of other entities or judicial discretion, we developed a robust analytical model grounded in machine learning methodologies. This model isolates each entity's performance, whether of a company, attorney, or law firm, from external biases, ensuring a more refined and impartial assessment of individual success. In practice, we employ advanced analytical techniques, making several adjustments to recalibrate the scores of all involved parties and their representatives, accounting for the various factors that can sway the outcome of patent litigation cases. This approach enables a more precise and equitable evaluation of individual success within the intricate patent litigation landscape, offering a comprehensive understanding of each participant's performance.

It’s essential to emphasize that none of these scoring metrics are designed to, nor should they be interpreted as, a reflection of the quality, experience, capability, or impartiality of any firm, attorney, company, or judge. Just as even the most skilled doctors may occasionally lose patients, attorneys can also lose cases, sometimes due to factors beyond their control. Moreover, attorneys might undertake cases they are more likely to lose as part of their professional duties. We strongly advocate for clients seeking a firm or attorney to conduct their due diligence, analysis, and interviews. This empowers clients to base their selection on which firm or attorney best meets the unique requirements and needs of each individual case. 
