Fiona | Posted yesterday 06:39
Current large model architectures are becoming increasingly complex. To better adapt to specific scenarios and improve performance, vendors have made numerous modifications: introducing mixture-of-experts (MoE) layers, expanding context windows, and adding all sorts of unconventional enhancements. The result is significant divergence among large models.
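
To make the MoE idea concrete, here is a minimal sketch of a top-k routed mixture-of-experts layer in PyTorch. The hidden sizes, expert count, and top-2 routing are illustrative assumptions, not any particular vendor's design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy top-k routed mixture-of-experts layer (illustrative only)."""
    def __init__(self, d_model: int = 512, d_ff: int = 2048,
                 n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model); each token is routed to its top-k experts only
        weights, idx = self.gate(x).topk(self.k, dim=-1)   # (n_tokens, k)
        weights = F.softmax(weights, dim=-1)               # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                   # tokens whose slot-th pick is e
                if mask.any():
                    out[mask] += weights[mask, slot:slot+1] * expert(x[mask])
        return out

# usage: y = TopKMoE()(torch.randn(4, 512))
```

Each token activates only k of the n experts, so parameter count grows with the number of experts while per-token compute stays roughly flat; vendors differ in routing, load balancing, and expert granularity, which is one source of the divergence described above.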

In theory, closed-source models can be deployed faster and more flexibly, as their makers don't need to worry about disclosing internal components, and their development priorities aren't driven by community demands. On the other hand, open-sourcing serves as effective marketing, generating greater visibility and providing commercial support for ongoing development. Overall, I believe closed-source models may hold the stronger advantage going forward, since the leading players aren't short on funding or monetization channels.
得失皆成过往 | Posted yesterday 06:40
The gap is widening. Open-source models have largely hit the ceiling of what’s achievable with the current technical stack.

Recent open-source models—DeepSeek 3.2, Kimi-K2, Qwen3, and GLM-4.6—are all quite similar in capability, with minimal differences in technical approach; most variations now lie in implementation details.

In contrast, closed-source models show clearer progress: the Gemini 3 series has improved significantly and Claude 4.5 is notably better, while OpenAI is harder to assess; its updates feel patchy, and GPT-5.2 isn't particularly impressive.

In terms of engineering readiness—especially for programming—closed-source models are far more usable in practice. Claude, in particular, stands out.

Open-source models may suffice for small projects, but their suitability for real production environments remains questionable.

I believe this stems mainly from two constraints: compute limitations and training data scarcity.

Closed-source models still rely fundamentally on scaling—using massive compute to train on vast datasets, gradually mitigating hallucinations through data volume and quality.
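
For reference, the "scaling" described here is often summarized by the compute-optimal loss fit from Hoffmann et al. (2022, the "Chinchilla" paper), where N is parameter count and D is training tokens; the exponents below are approximately that paper's fitted values:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34, \quad \beta \approx 0.28
```

Under this fit, loss keeps falling only when N and D grow together, which is why the compute ceiling and the data ceiling described next bind at the same time.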

In China, compute resources are constrained, and high-quality training data is scarce. Most top-tier academic papers are in English and under copyright, limiting accessibility.

Moreover, when it comes to code, domestic internet companies simply don’t have as much proprietary code as giants like Google. Google can let its AI learn from its own internal, closed-source codebase—but Qwen, as an open-source model, cannot easily access Alibaba’s private code due to serious security and confidentiality concerns.

This gives Google a tremendous advantage.

Google Search has amassed enormous data, its cloud infrastructure provides ample compute, YouTube contributes rich video content, it has deep financial resources, and it designs its own TPUs.

Overall, Google maintains a substantial lead.

Among Chinese players, Alibaba might be in the best position—but its data is fragmented. Alibaba has cloud and e-commerce data, ByteDance owns short-video data, Baidu holds search data, while DeepSeek lacks access to any high-quality proprietary dataset.

The reason Chinese AI models are somewhat competent in coding is largely thanks to the abundance of open-source projects that can be used as training data. But in other specialized domains, their capabilities still lag noticeably.
稀帅 | Posted yesterday 06:40
My understanding is that the gap between open-source and closed-source large language models is narrowing. As large model technology continues to evolve, many open-source models, through larger parameter counts and better training techniques, have reached performance comparable to mainstream closed-source models on general benchmarks. In everyday scenarios such as casual conversation and information summarization, ordinary users can hardly perceive any difference between them.

Although closed-source models excel in general capabilities, they are not necessarily tailored for specialized domains like finance or healthcare. In contrast, open-source models can be fine-tuned for customization, enabling performance breakthroughs in specific use cases. Practical evidence shows that open-source models, when fine-tuned with high-quality domain-specific data, can achieve over 95% of the performance of closed-source models in targeted fields while reducing costs by more than 50%.
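
As a concrete illustration of that fine-tuning path, here is a minimal LoRA sketch using Hugging Face transformers, peft, and datasets. The base model name, the file domain_corpus.jsonl, and all hyperparameters are placeholder assumptions; this is a sketch of the approach, not a tuned recipe:

```python
# Minimal LoRA fine-tuning sketch for domain adaptation of an open-weight model.
# Base model, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-7B"  # any open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapters instead of all base weights.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of total weights

# Expect one JSON object per line with a "text" field, e.g. {"text": "..."}.
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    # mlm=False -> causal LM objective; the collator pads batches and builds labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("ft-out/adapter")  # only the small adapter weights are saved
```

The practical appeal is what this post describes: the base weights stay frozen and only a small adapter is trained and stored, which is what makes the claimed cost reduction plausible.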

That said, closed-source models still hold the performance advantage in certain high-end, specialized scenarios; in complex logical reasoning, deep multimodal analysis, and demanding creative writing, they maintain a clear lead.

Therefore, overall, closed-source models are not universally superior to open-source ones. The relative strengths and weaknesses of each should be evaluated based on the specific application scenario.