Legal debates surrounding AI agents are increasingly dominated by calls for comprehensive reform, including proposals to grant legal personhood to AI. Tort law, in particular, is often said to be unable to keep pace with rapid technological change: because AI systems can be unpredictable and opaque, critics argue that the harms they cause challenge the foundations of tort law. These claims often reflect "technological exceptionalism" (the notion that emerging technologies are so novel or disruptive that they require entirely new legal frameworks) and fuel debates about whether AI agents need a distinct legal regime of their own.
[Image: Liability for Harm Caused by AI Agents]
The argument that "existing liability systems are inadequate to meet the challenges" follows a familiar pattern: whenever a new technology emerges or society changes, scholars and policymakers warn of unique risks, emphasize the technology's supposedly distinctive characteristics, and call for specialized legal frameworks. What this overlooks is how often long-standing legal frameworks have adapted to profound social and technological transformations, over decades or even centuries, with only minimal modification.

The author contends that today’s AI agents may be closer to traditional products than commonly perceived. Existing principles of negligence and product liability, with targeted adjustments, can effectively address AI-related harms without the need for radical reform.

I. Legal and Economic Analysis: Normative Parallels Between AI Agents and Traditional Products

To develop this argument, it is useful to adopt a law and economics perspective, which offers a distinct normative lens on the issue. Rather than focusing only on the formal conditions and limits of tort law, this approach asks, from an economic standpoint, how far liability rules actually incentivize particular behaviors. Its core premise is that individuals and businesses are at least partly driven by profit: a company deciding whether to invest more in safety before launching an AI agent may forgo that investment if the expected cost of compensation is lower than the cost of the additional preventive measures. Companies and individuals typically also weigh ethical concerns and reputational risk, but economic benchmarks provide a useful framework for steering the behavior of AI developers and users.
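A stylized numerical sketch may help here (the figures are hypothetical, not the author's). Suppose additional pre-release testing would cost $100,000 and would eliminate a 5% chance of a harm causing $5,000,000 in damages, an expected harm of 0.05 × $5,000,000 = $250,000. If liability rules lead the developer to expect to pay that full amount, testing is the cheaper option; if, however, victims rarely succeed in proving their claims and the developer expects to pay only, say, $60,000 on average, a purely profit-driven firm will skip the testing. How liability is allocated therefore directly shapes which safety investments developers find worthwhile.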

From a normative perspective, AI agents share significant similarities with traditional products.

Despite differences in their specific obligations, and despite the potential unpredictability and opacity of AI, developers, like manufacturers of traditional products, bear a duty to prevent foreseeable harms. They are expected to design and train AI systems with due care, use adequate training data, and implement safety measures that address known risks; they are also expected to make their systems as accurate as possible. They can further reduce the risk of harm by making their agents interpretable, which enables a degree of oversight.

As with any other product, the producer (here, the developer of the AI agent or system) is generally best positioned to address and mitigate the product's risks. Through choices about training data, training scale, model design, and safety measures, developers have unique capabilities to prevent or correct safety problems, and they can spread the cost of those measures through pricing. This aligns with the law and economics idea of the developer as the least-cost avoider and provides an important normative basis for ultimately holding developers liable for harms caused by their AI agents or systems. The opacity and complexity of AI reinforce this conclusion, since identifying or fixing flaws is difficult for anyone other than the developer.

At the same time, developers are not the only relevant party. It would make little sense to hold only the developer liable if a doctor uses an AI agent while knowing its error rate, and it would be just as inappropriate to place full responsibility on the developer if someone chooses to rely on ChatGPT instead of a lawyer while knowing this is improper. In such cases, incentivizing responsible use is relatively straightforward: when users are aware of a product's dangers, liability rules can deter them from using it in foreseeable ways that cause harm. Normatively, this implies that developers should inform users of the limitations and risks of their AI systems. Admittedly, for inherently complex systems such as autonomous vehicles, users' ability to understand the risks may be limited, and in those cases shifting liability away from developers makes little sense. Crucially, users' liability should be proportional to their capacity to make informed deployment decisions.

Holding users rather than developers liable is normatively most important when users can understand the relevant risks, because it may be socially desirable to encourage developers to release AI agents and systems even though they have limitations. A tool may be flawed, or even capable of causing harm, and still deliver significant benefits when used appropriately. If a doctor treats a patient based solely on GPT-5's symptom analysis rather than on medical expertise, we would not, and should not, reflexively hold OpenAI liable; developers cannot reasonably be expected to bear full responsibility for that kind of misuse. Ensuring user accountability prevents incentive failures that could otherwise deter developers from providing valuable tools.

However, challenges remain. Beyond complexity, the rapid evolution of AI means its continued development and application may generate social benefits that extend beyond the interests of developers or users themselves. This phenomenon—often referred to as "positive externalities"—reflects a gap between future social gains (e.g., reduced accidents from autonomous vehicles) and current user expectations, which are often limited to immediate utility. Consequently, even if continuous improvement can generate public value over time, developers may struggle to market AI solutions in the early stages. Because this future value is difficult to realize today, imposing liability on developers and users prematurely may inhibit innovation, potentially delaying or even preventing progress that could yield long-term social benefits.
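A stylized illustration of this wedge (again with hypothetical figures): suppose an early autonomous-driving system is worth $50 million to its current buyers, while its continued deployment and improvement would eventually avert accidents worth $500 million to society at large. If early crashes expose the developer to, say, $80 million in expected liability, deployment is unprofitable even though the technology's total social value far exceeds its expected harms, and the improvements that would generate that future value may never materialize.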

II. Practical Challenges: The Limitations of Negligence Liability in AI Agent Applications

These normative arguments aside, the U.S. liability system as it stands may not allocate these risks appropriately. Under the existing framework, negligence is the default general regime, while product liability, which focuses on defective products, is a specialized and stricter regime designed to hold manufacturers accountable. Under a "strict liability" rule, developers or users are liable for the harms caused regardless of how carefully they acted.

Under negligence, a victim can recover compensation by proving that their harm was caused by another party's negligent conduct, here that of the AI agent's developer or user. This narrows the scope of liability: the victim must show that the developer or user failed to act as a reasonable person would have in developing or deploying the agent. Negligence is therefore more limited than a strict liability regime.

Furthermore, AI agents have characteristics, such as autonomy, imperfection, unpredictability, and opacity, that significantly complicate the assessments negligence law requires. Their complexity and unpredictability make it difficult for victims to prove negligence and causation. For example, it may be hard to determine whether a harm stems from inadequate development or merely from the inherent limitations of an otherwise well-built system, as discussed in more detail below.

Negligence law does contain principles that address some of these issues, such as the Learned Hand test, under which liability arises only when the cost of prevention is less than the expected cost of the harm. Even so, these principles struggle with the difficulties created by AI's complexity.
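In its usual formulation (restated here with hypothetical numbers), the Hand test asks whether B < P × L, where B is the burden of the precaution, P the probability of the harm, and L the magnitude of the loss. If a safety evaluation would cost B = $50,000 and would eliminate a P = 1% chance of an L = $10,000,000 harm, then P × L = $100,000 exceeds B, and omitting the evaluation would be negligent. The difficulty with AI lies not in the formula itself but in estimating P and L, and even in identifying which precautions were available, for opaque and rapidly changing systems.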

Causation is a core element of negligence liability. In the AI context, we often cannot determine whether harm was caused by negligence in the development or use of the AI agent, or whether it would have occurred even with the exercise of due care. However, existing legal systems offer useful guidance. For instance, medical malpractice law addresses situations where it is impossible to determine whether improper conduct directly caused harm. If a doctor fails to provide appropriate treatment, leading to patient suffering, we may never know if the outcome would have been different had the doctor acted more carefully—because treatment may not be effective in all cases. In such instances, courts treat the doctor as having deprived the patient of a chance to avoid harm and award compensation equal to the total damages multiplied by the probability of causation. A similar "loss of chance" framework can be applied to the AI context.
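To make the arithmetic concrete (again with hypothetical figures): if total damages are $200,000 and the evidence suggests a 30% probability that negligent development or use of the AI agent caused the harm, a loss-of-chance approach would award roughly 0.30 × $200,000 = $60,000. The victim is compensated for the chance of a better outcome that the negligence took away, even though causation cannot be established outright.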

Nevertheless, the complexity of most AI systems makes assessing causation and negligence far from straightforward, especially for lawyers and judges without technical expertise. Because of the asymmetry of information and expertise between developers or users and victims, victims also struggle to evaluate how a system performed. In light of this, the European Union considered introducing a presumption of causation to assist victims of AI-related harms: if a victim could prove fault and harm, a causal link between the two would be presumed. The proposal was ultimately not adopted.

Without such a presumption or similar tools, victims of AI-related harms are left to bear the consequences alone. This not only undermines the compensation mechanism but also reduces the incentive for developers and users to prevent harm.

III. The Core of Liability Allocation: Balancing the Obligations of Developers and Users

While negligence is the default framework, it is not the only regime applicable to AI agents and systems. Many of the law and economics considerations outlined above, which point toward assigning liability primarily to developers, also underpin product liability law. Product liability thus provides an independent, two-part pathway for holding manufacturers accountable:

The first component is design defects, which concern how a product is designed; manufacturers are typically held liable if the product could reasonably have been made safer.

The second component is manufacturing defects, where manufacturers are subject to strict liability—i.e., liability without fault—when harm is caused by a defect introduced during the manufacturing process.

In the AI context, manufacturing defects and their associated "strict" liability regime are particularly noteworthy. At first glance, this appears to offer a way to hold developers liable regardless of negligence, consistent with the law and economics analysis above. The problem, however, is that most erroneous AI outputs stem from the training process rather than from manufacturing defects such as failed sensors or components. Consequently, harmful AI outputs are generally analyzed as design defects under the risk-utility test, which narrows the reach of strict liability and pushes the analysis back toward negligence-based design-defect rules.

Nevertheless, product liability still provides valuable tools. Notably, it imposes a "duty to warn," which echoes the obligations identified from the law and economics perspective: developers must provide warnings that enable users to foresee and mitigate harm. However, the complexity of many AI systems limits the effectiveness of this duty, because their risks are difficult to communicate comprehensively or in detail.

IV. Reform Path: Targeted Adjustments Over Disruptive Reconstruction

Based on the foregoing analysis, the following conclusions can be drawn about today's AI agents:

First, if AI continues to evolve rapidly, more fundamental reform may eventually be necessary; for now, however, existing AI agents and systems should be subject to roughly the same liability rules as traditional products, with two caveats. As AI capabilities advance, we may need to place greater emphasis on the behavior of the AI agent itself rather than only on the humans associated with it; and certain adjustments are needed for especially complex systems, which means imposing stricter accountability on developers who cannot fully inform users.

Second, in some cases, society should bear part of the risks of AI development and use to incentivize innovation with long-term public benefits.

Current tort law does not adequately provide these incentives. The traditional distinction between design defects and manufacturing defects maps poorly onto AI agents, especially where risks stem from flawed or biased training data, which courts would typically treat as a design defect. Yet poor-quality training data that degrades an AI agent's performance may be functionally equivalent to a manufacturing defect, and treating it as one would subject developers to strict liability. Without such adjustments, the existing regimes (such as negligence) cannot provide sufficient incentives for AI safety. By contrast, when users recklessly use AI agents or systems despite knowing their risks, negligence liability can play a key role in holding them accountable.

Both negligence and product liability already contain many of the elements needed to address AI-related harms: negligence remains critical for regulating user behavior when users are aware of an AI system's risks, while product liability clarifies developers' responsibility for design or training defects.

What we need is not radical reform but careful adjustment, and potentially the extension of certain domain-specific regimes (such as medical malpractice compensation) to the broader AI context. Combined with greater technical literacy among legal professionals, this approach is the best way to ensure that liability for AI agents falls on the parties who should truly bear it.
