【Sam Altman】Altman's response after the Molotov cocktail attack

2026-04-13 12:08
Problem description:

Repost with machine translation.

English original

Here is a photo of my family. I love them more than anything.

Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me. The first person did it last night, at 3:45 am in the morning. Thankfully it bounced off the house and no one got hurt.

Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside. Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.

First, what I believe.

· Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
· AI will be the most powerful tool for expanding human capability and potential that anyone has ever seen. Demand for this tool will be essentially uncapped, and people will do incredible things with it. The world deserves huge amounts of AI and we must figure out how to make it happen.
· It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.
· AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
· Adaptability is critical. We are all learning about something new very quickly; some of our beliefs will be right and some will be wrong, and sometimes we will need to change our mind quickly as the technology develops and society evolves. No one understands the impacts of superintelligence yet, but they will be immense.

Second, some personal reflections.

As I reflect on my own work in the first decade of OpenAI, I can point to a lot of things I'm proud of and a bunch of mistakes. I was thinking about our upcoming trial with Elon and remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI. I'm proud of that, and the narrow path we navigated then to allow the continued existence of OpenAI, and all the achievements that followed.

I am not proud of being conflict-averse, which has caused great pain for me and OpenAI. I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission. We knew going into this how huge the stakes of AI were, and that the personal disagreements between well-meaning people I cared about would be amplified greatly. But it's another thing to live through these bitter conflicts and often to have to arbitrate them, and the costs have been serious. I am sorry to people I've hurt and wish I had learned more faster.

I am also very aware that OpenAI is now a major platform, not a scrappy startup, and we need to operate in a more predictable way now. It has been an extremely intense, chaotic, and high-pressure few years.

Mostly though, I am extremely proud that we are delivering on our mission, which seemed incredibly unlikely when we started. Against all odds, we figured out how to build very powerful AI, figured out how to amass enough capital to build the infrastructure to deliver it, figured out how to build a product company and business, figured out how to deliver reasonably safe and robust services at a massive scale, and much more. A lot of companies say they are going to change the world; we actually did.

Third, some thoughts about the industry.

My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: "Once you see AGI you can't unsee it." It has a real "ring of power" dynamic to it, and makes people do crazy things. I don't mean that AGI is the ring itself, but instead the totalizing philosophy of "being the one to control AGI". The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and making sure democratic system stays in control.

It is important that the democratic process remains more powerful than companies. Laws and norms are going to change, but we have to work within the democratic process, even though it will be messy and slower than we'd like. We want to be a voice and a stakeholder, but not to have all the power.

A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology. This is quite valid, and we welcome good-faith criticism and debate. I empathize with anti-technology sentiments and clearly technology isn't always good for everyone. But overall, I believe technological progress can make the future unbelievably good, for your family and mine.

While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.


Netizen replies:
--[1]--:

"Early this morning, someone threw a Molotov cocktail at Sam Altman's residence and also made threats against our San Francisco headquarters. Thankfully, no one was hurt."

"We are very grateful for the SFPD's rapid response and the city government's support in keeping our employees safe. The suspect is now in custody, and we are cooperating with law enforcement on the investigation."

Source

· KQED News: "Man Threw Molotov at Sam Altman's Home, Then Threatened to Burn Down OpenAI, Police Say" (April 10, 2026)


--[2]--:

It's already hard to imagine oai without sam.


--[3]--:

Holy crap, I can't, this is too funny. What a surreal reality.


--[4]--:

OpenAI's official statement (original text)

"Early this morning, someone threw a Molotov cocktail at Sam Altman's home and also made threats at our San Francisco headquarters. Thankfully, no one was hurt."

"We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe. The individual is in custody, and we're assisting law enforcement with their investigation."

——OpenAI spokesperson Jamie Radice


--[5]--:

When Westerners go witch-hunting they really keep it authentic, straight to burning at the stake, huh?


--[6]--:

I can't take it, forgive me for laughing @gushang01


--[7]--:

"A 20-year-old man was arrested Friday after throwing a Molotov cocktail at the home of OpenAI CEO Sam Altman in San Francisco's Russian Hill neighborhood and threatening to burn down the company's headquarters, according to authorities and the company."

Tweet link: https://twitter.com/sfchronicle/status/1650327689042612224


--[8]--:

I typed out a long reply here, but deleted it in the end.

Altman is the kind of person who claims to have a bottom line but in practice goes back on his word with no clear principles; even now, when his family is in danger and he starts tearing up, he still only dares to play both sides and cannot produce the firm stance a CEO ought to have.


--[9]--:

Thank goodness no one was hurt. Altman really is OpenAI's spiritual pillar.


--[10]--:

GPT: and while we're at it, one more jab at the thug who threw the Molotov.


--[11]--:

True.

Definitely much better than a除 next door.


--[12]--:

Toward China it's basically already at Google's tier, more or less turning a blind eye.


--[13]--:

Why is sam still alive? Because GPT caught the Molotov cleanly ()


--[14]--:

"Sharing a photo in the hopes that it might dissuade the next person, no matter what they think about me, from throwing a Molotov cocktail at our house."

Somehow that's a little adorable.


--[15]--:

What I actually want to know is why nobody has thrown a Molotov at A\... sam really doesn't deserve this.


--[16]--:

Truly absurd, though the statement is actually pretty well written. Overall, as the industry leader, some of the things OpenAI has tried have been decent, and it's reasonably friendly toward China (looking at you, a certain A).


--[17]--:

This is what you get for not open-sourcing. Nobody wants a new tech oligarch!


--[18]--:

Altman must think his husband and kid are just so cute and so moving.

Tags: OpenAI, pure fluff