Huawei's Open-Source Pangu Large Model Accused by the HonestAGI Team of Plagiarizing Qwen

2026-04-11 12:29
Problem description:

While researching model fingerprinting through the standard-deviation patterns of large language models' attention parameters, the HonestAGI research team found an extremely high correlation of 0.927 between Huawei's Pangu Pro MoE model and the Qwen-2.5 14B model.

https://github.com/HonestAGI/LLM-Fingerprint
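The idea behind this kind of fingerprint can be sketched roughly as follows: compute the standard deviation of each layer's attention projection weights, treat the resulting per-layer sequence as the model's "fingerprint," and compare two fingerprints with a Pearson correlation. This is a minimal illustration, not HonestAGI's actual code; the function names, the resampling step for models with different layer counts, and the toy random "weights" are all assumptions for demonstration.

```python
import numpy as np

def attention_std_fingerprint(attn_weights):
    """Per-layer standard deviation of attention weight matrices.

    `attn_weights` is a list of 2-D arrays, one per layer (a stand-in
    for a real model's attention projection tensors).
    """
    return np.array([w.std() for w in attn_weights])

def fingerprint_correlation(fp_a, fp_b):
    """Pearson correlation between two fingerprints.

    If the layer counts differ, resample both to the shorter length by
    linear interpolation -- an assumption here; the paper's actual
    alignment method may differ.
    """
    n = min(len(fp_a), len(fp_b))
    grid = np.linspace(0.0, 1.0, n)
    a = np.interp(grid, np.linspace(0.0, 1.0, len(fp_a)), fp_a)
    b = np.interp(grid, np.linspace(0.0, 1.0, len(fp_b)), fp_b)
    return np.corrcoef(a, b)[0, 1]

# Toy demo: random matrices whose spread grows with layer depth,
# standing in for two hypothetical models with 12 and 10 layers.
rng = np.random.default_rng(0)
model_a = [rng.normal(0, 0.02 * (i + 1), (64, 64)) for i in range(12)]
model_b = [rng.normal(0, 0.02 * (i + 1), (64, 64)) for i in range(10)]
score = fingerprint_correlation(
    attention_std_fingerprint(model_a),
    attention_std_fingerprint(model_b),
)
print(round(score, 3))
```

Note that the Pangu team's counter-argument targets exactly this kind of metric: smooth, monotone per-layer std profiles are common across many architectures, so even unrelated models can score high correlations under it.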

The Pangu team got into an argument with the HonestAGI team in issue #8:


Pangu response

Opened 02:04 AM, 04 Jul 25 UTC, by 4n0nym0u5-end

The lead developer of Pangu LLM clarified internally that your evaluation methodology is highly unscientific, as demonstrated below. Using the method described in your paper, the following model comparisons were evaluated:

- pangu-72b-a16b vs. Qwen2.5-14b = 0.92
- baichuan2-13b vs. Qwen1.5-14b = 0.87
- baichuan2-13b vs. pangu-72b-a16b = 0.84
- baichuan2-13b vs. Qwen2.5-14b = 0.86

Models with different numbers of layers also yield highly similar results under your

Tags: Artificial Intelligence