Section 337 ITC Intelligence Report Ranking Methodology 2024

Our ranking methodology has evolved over time as we have deepened our understanding of the complexities within each IP practice area and of the features that distinguish one practice from another. In the following section, we elaborate on our method for establishing a fair and relevant scoring system that assesses the activity, success, and performance of all participants in ITC Section 337 investigations.

General Considerations

For the Activity Score and Ranking, we analyzed ITC violation investigations, a subset of all Section 337 Investigations (including Violation, Enforcement, Modification, Advisory, and Bond Forfeiture). Regarding the Success Score calculation, we focused solely on Terminated Violation Investigations, a subset of violation investigations.

The Performance Score and Ranking represent a weighted average of the Success and Activity Scores. We identified the best-performing entities as those combining the highest success scores with participation in a larger number of investigations. Below, we cover each of these three scores: Success, Activity, and Performance, considering all stakeholders such as attorneys, law firms, respondents, complainants, and Administrative Law Judges (ALJs).

Activity Score

Previously, we determined activity by simply counting the total number of cases for an attorney, law firm, or company. However, recent activity arguably provides a better indicator of a firm's current level of engagement. For example, in the 2024 activity ranking, a firm with five ITC investigations in 2018 should rank lower than one with five investigations in 2023. To address this, we use a weighted function that discounts older cases when calculating total activity. In essence, recent cases carry more weight than older ones, since a lack of recent activity may signal a slowdown or the departure of relevant attorneys from a firm.
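The report does not publish the exact discounting function, but the age-weighted counting described above can be sketched as an exponential decay, where the half-life (three years here) is purely illustrative:

```python
def weighted_activity(case_years, ranking_year=2024, half_life=3.0):
    """Sum age-discounted case counts: a case `half_life` years old
    counts half as much as a current one (illustrative decay rate)."""
    return sum(0.5 ** ((ranking_year - year) / half_life)
               for year in case_years)

# A firm with five 2023 cases outranks one with five 2018 cases:
recent = weighted_activity([2023] * 5)
older = weighted_activity([2018] * 5)
```

Any monotonically decreasing weight (linear, exponential, or step-based) would achieve the same ordering; the decay rate controls how quickly older cases fade from the ranking.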

We then normalize the calculated activity score to a range between 0 and 100 for simplicity and consistency with the other scores. To compress the considerable spread between raw scores, we apply a logarithmic transformation before scaling to 100 (all success, performance, and activity scores are measured out of 100).
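A minimal sketch of this log-then-scale normalization, assuming the maximum transformed score is mapped to 100 (the report does not specify the exact scaling anchor):

```python
import math

def scale_scores(raw_scores):
    """Compress large gaps between raw scores with a log transform,
    then linearly scale so the largest score maps to 100."""
    logged = [math.log1p(s) for s in raw_scores]  # log1p handles 0 safely
    top = max(logged)
    if top == 0:
        return logged  # all-zero input stays at zero
    return [100.0 * v / top for v in logged]
```

The log transform keeps a firm with 200 cases from dwarfing one with 20 by a factor of ten in the final 0-100 scale.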

Success Score

A terminated ITC investigation can yield various outcomes for respondents, sometimes differing even among respondents within the same case. Therefore, rather than assigning identical scores to all respondents (and their attorneys) for a single ITC case, the scoring methodology should ideally accommodate these diverse respondent outcomes, as we have endeavored to do here.

Moreover, some respondents may exit the case for various reasons before the adjudication of a 337 decision (e.g., Withdrawn, Settlement, Consent Order), while others may remain until a 337 decision is reached (No Violation; Violation, Settlement; Violation with a Limited Exclusion Order (LEO), General Exclusion Order (GEO), or Cease and Desist Order (CDO)). Additionally, some respondents may default or may never be served, all of which must be considered in the scoring process.

Table 6.1 summarizes how we considered each of these outcomes and the scores we allocated to each of the parties and their representatives (i.e., attorneys or law firms). It also includes the scores we allocated to judges based on each outcome. 

| Outcome                 | Complainant | Respondent | Comp. Atty/Firm | Resp. Atty/Firm | Judge |
|-------------------------|-------------|------------|-----------------|-----------------|-------|
| Withdrawn               | 0.25        | 0.75       | 0.25            | 0.75            |       |
| No Violation            | 0           | 1          | 0               | 1               | 0     |
| Settlement              | 0.5         | 0.5        | 0.5             | 0.5             |       |
| Consent Order           | 0.75        | 0.25       | 0.75            | 0.25            |       |
| Violation, Settlement   | 0.75        | 0.25       | 0.75            | 0.25            | 1     |
| Violation, LEO/GEO/CDO  | 1           | 0          | 1               | 0               | 1     |
| Default                 | 1           | 0          |                 |                 |       |
| Not served              |             |            |                 |                 |       |

Table 6.1 ITC Outcomes and Scores Applied
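Table 6.1 can be encoded directly as a lookup, with `None` marking the blank cells where a participant is not scored for that outcome (attorney/firm scores mirror their clients' except for defaults, which are excluded, as discussed in the Attorneys and Firms section):

```python
# (complainant, respondent, judge) scores per outcome; None = not scored.
OUTCOME_SCORES = {
    "Withdrawn":              (0.25, 0.75, None),
    "No Violation":           (0.0,  1.0,  0.0),
    "Settlement":             (0.5,  0.5,  None),
    "Consent Order":          (0.75, 0.25, None),
    "Violation, Settlement":  (0.75, 0.25, 1.0),
    "Violation, LEO/GEO/CDO": (1.0,  0.0,  1.0),
    "Default":                (1.0,  0.0,  None),
    "Not served":             (None, None, None),
}

def party_score(outcome, role):
    """Return the raw Success Score for 'complainant', 'respondent',
    or 'judge' under a given outcome, or None if not scored."""
    index = {"complainant": 0, "respondent": 1, "judge": 2}[role]
    return OUTCOME_SCORES[outcome][index]
```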

If a case was consolidated (e.g., 337-TA-1021), we used the outcome of the final case (i.e., 337-TA-1007). Additionally, if an investigation was terminated due to the invalidation of all asserted claims, we regarded it as equivalent to “no violation” or a victory for the respondent. 

In some instances, not all details relevant to various outcomes are explicitly available within the docket materials. For example, confidential settlement terms or underlying legal strategies may remain undisclosed to the public. Furthermore, we acknowledge and fully concur with the opinions expressed by many surveyed attorneys that determining which party (or attorney/firm) emerged "ahead" in a terminated ITC investigation heavily depends on the specifics of each case (many of which are not accessible from public materials), inevitably leading to some imperfections in any scoring system built on the available case information.

Despite these unavoidable constraints, our aim was to devise a scoring methodology that could be consistently and fairly applied to all parties, considering the limitations of available information, time, and resources. We sought to avoid penalizing or rewarding participants in cases where their impact was minimal or uncertain, and we endeavored to provide as much “partial credit” as feasible to enable each participant to achieve their highest attainable score.

Normalization Using Machine Learning Models 

The outcome of an investigation depends on a multitude of factors, including the complainant, the composition of respondents and their collective actions, the Administrative Law Judge, relevant products and patents, as well as the attorneys and testifying experts involved, among numerous others.

To accurately calculate the success of an individual such as an attorney or an expert, it is imperative to evaluate their performance independently of other factors. Isolating their impact in a single case proves challenging, as it is difficult to disentangle their contribution from the other variables present. However, leveraging data from numerous investigations enables us to evaluate an individual's conduct under varying conditions across different cases. While this task becomes daunting with hundreds of cases and thousands of individuals, machine learning techniques, particularly normalization, allow us to mathematically estimate the influence of each variable on an individual's success or failure. This process enables us to isolate the individual's performance from other factors, revealing their true impact.

The implementation of this normalization using machine learning represents a key advancement in our reports. By applying regression analysis, we adjust the scores of all parties based on the presence of other factors when evaluating their success. Once the adjusted success metrics are determined for all participants in an investigation, we calculate the average success score for each party or their representatives by aggregating the scores across all cases in which they were involved. 
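The report identifies the adjustment method only as regression analysis; a minimal sketch of the idea, assuming a simple linear model with one-hot indicators for each participant fit by least squares (the actual model and features are not disclosed):

```python
import numpy as np

def adjusted_effects(cases, participants):
    """Estimate each participant's effect on case outcomes while
    controlling for everyone else present, via linear regression.
    `cases` is a list of (set_of_participant_ids, outcome_score)."""
    index = {p: i for i, p in enumerate(participants)}
    X = np.zeros((len(cases), len(participants)))  # one-hot design matrix
    y = np.zeros(len(cases))
    for row, (present, score) in enumerate(cases):
        for p in present:
            X[row, index[p]] = 1.0
        y[row] = score
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(participants, coef))
```

With enough cases, a participant who wins even alongside weak co-factors receives a larger coefficient than one whose wins coincide with strong co-counsel or favorable judges.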

Performance Score

High activity levels tend to pull success scores toward the mean, making it impractical to directly compare success scores among firms or attorneys with significantly different workloads. To account for this, the Performance Score combines the activity and success scores, allowing for a precise ranking and scoring of companies, attorneys, and firms.

This scoring approach proves valuable as it helps pinpoint attorneys and law firms with not only higher activity levels but also high success rates across cases. Essentially, it identifies those who demonstrate both experience and success in handling ITC Section 337 Investigations, which aligns with our objective.
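Since both inputs are on the same 0-100 scale, the blend can be sketched as a simple convex combination; the report does not publish the actual weights, so the 0.4/0.6 split below is purely illustrative:

```python
def performance_score(activity, success, w_activity=0.4):
    """Weighted average of two 0-100 scores. The weight on activity
    (0.4 here) is an illustrative assumption, not the report's value."""
    return w_activity * activity + (1.0 - w_activity) * success
```

A higher activity weight rewards experienced high-volume firms; a higher success weight rewards selective firms with strong win rates.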

Complainants and Respondents

Complainants and respondents were evaluated for various ITC outcomes, as outlined in Table 6.1. Withdrawn complaints were allocated a score of 0.25 for complainants and 0.75 for respondents, reflecting the consensus that while withdrawals typically favor the respondent, there are instances where the complainant may benefit (e.g., potential case refiling or companion litigation).

A finding of no violation was attributed a score of 1 for the respondent, acknowledging their successful defense. Conversely, a violation accompanied by a limited or general exclusion order (LEO, GEO) or a cease and desist order (CDO) was scored 1 for the complainant, given the substantive 337 finding and the exclusionary remedies obtained. Defaults in analyzed cases, typically linked with a summary violation, were also scored 1 for the complainant and 0 for the respondent.

Regarding settlements before the adjudication of a 337 decision, where settlement agreement terms are undisclosed, both parties were deemed equal winners, each receiving 0.5 points. Settlements following an initial finding of a 337 violation (Violation, Settlement) were primarily seen as advantageous for the complainant, yet beneficial to the respondent insofar as they avoided an LEO/GEO/CDO, and were hence scored 0.75/0.25, respectively.

Consent orders were scored 0.75/0.25, primarily favoring the complainant: the complainant received the majority of the points without the outcome being weighed equally to a 337 violation finding. Consent orders entered where the Commission Investigative Staff's Pre-Hearing Brief indicated a likely 337 violation finding were scored 1/0 for complainant/respondent.

Cases with a not-served status, or where no 337 investigation was instituted, were not factored into any party's scores. (Blank entries in Table 6.1 indicate that the respective participant was not scored for that particular outcome, as detailed here and in the subsequent sections for Attorneys, Firms, and Judges.)

Once all Success Scores were calculated for each complainant and respondent across all investigations, the regression model was applied to adjust scores for all parties before calculating the average success scores. 

Attorneys and Firms

Attorneys and law firms, being closely involved in cases on behalf of their clients, were scored identically to their clients (complainant or respondent), except in cases of default. We opted to exclude defaults from attorneys' and firms' scores, given their reduced influence on that outcome; survey results, with 64% agreement, supported this decision. We recognize that selecting ITC respondents is often part of the complainant's strategy, and anticipating defaults may align with its goals. While this strategy may involve ITC attorneys, it is not always the case.

After calculating raw Success Scores for all attorneys and law firms, the regression model was applied to adjust scores in consideration of other factors. Subsequently, the average success score was calculated for each attorney or their law firm. 

Judges

When companies or attorneys review a judge’s record in ITC Section 337 investigations, the primary concern typically revolves around the judge’s inclination towards either the complainant (violation) or the respondent (no violation) when rendering decisions on 337 violations. For all other outcomes listed in Table 6.1, the judge’s involvement is often minimal, and incorporating those outcomes into the judge’s scores would dilute their scores, providing less insight into their actual decision-making in ITC Section 337 investigations. Therefore, judges were only scored in cases where a 337 finding was reached, receiving a score of either 0 for no violation or 1 for violation. Consequently, a judge’s score closer to 1 indicates a greater tendency to decide in favor of the complainant (violation).
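The judge metric described above reduces to a filtered average over 337 findings only; a minimal sketch:

```python
def judge_tendency(decisions):
    """Average a judge's 337 findings: 1 = violation (complainant),
    0 = no violation (respondent). All other outcomes (settlements,
    withdrawals, consent orders, etc.) are excluded from the score."""
    scored = [1.0 if d == "Violation" else 0.0
              for d in decisions if d in ("Violation", "No Violation")]
    return sum(scored) / len(scored) if scored else None
```

A value near 1 indicates a judge who more often finds a violation; `None` means the judge has no 337 findings on record to score.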
