Their trajectory was clear the moment they signed a deal with Microsoft, if not sooner.
Absolute snakes - if it's more profitable to manipulate you with outputs or steal your work, they will. Every cent and byte of data they're given will be used to support authoritarianism.
I agree with ya. You aren't alone in this. For what it's worth, ChatGPT subscription cancellations have risen ~300% in the last month.
Also, Anthropic/Gemini/even Kimi models are pretty good, for what it's worth. I used to use ChatGPT, and I still sometimes accidentally open it, but I use Gemini/Claude nowadays and personally find them better anyway.
There is a very large difference between government contracts in general and terms allowing autonomous drones that can kill without any human in the loop.
I know the difference between this is none but to me, it's that Anthropic stood for what it thought was right. It drew a line even though that may have cost it money, gotten it publicly announced as a supply-chain risk, and brought all the fallout you can see in that particular thread.
Speaking personally, I'm not a fan of these companies in general, and yes, I love OSS models. But I still very much appreciate at least Anthropic's line of morality, which might seem insignificant to many people, but to me it isn't.
So for the workflows I used OpenAI for, I find Anthropic/Gemini to be a good substitute. I love OSS models too, btw, which is why I also recommended Kimi.
> I know the difference between this is none but to me
Edit: just a very minor nitpick of my own writing, but I meant "I know the difference between this could look very little to some, maybe none, but to me..." rather than "I know the difference between this is none but to me".
I was clearly writing this way too late at night haha.
My point sort of was/is that Anthropic drew a line at something and is taking massive supply-chain losses and risks for it, and that is the kind of thing I would support a company for, rather than, say, OpenAI.
Big fan of OpenAI; I recently swapped over due to their recent policies. Will never use Anthropic again. I think GPT-5 is better and I like the company's values.
Sorry you think stopping a terrorist trying to mass murder people with AI is a bad thing. One could very easily argue that the murder part is what you like about Anthropic: that you just like terrorists being able to kill civilians.
Imagine the following: Islamic terrorists are planning a terror attack on a Christmas festival in Berlin. Their texts were intercepted, but they were encoded. AI can read, help decode, and flag those messages to stop the terrorist attack and eliminate the attackers. In your world, you think it's morally right to let the terrorists mass murder people in Berlin rather than do what we can to stop it.
So firstly, my example isn't the government killing innocent people. It's them killing Islamic terrorists trying to commit genocide against people celebrating at a Christmas parade. Personally, I don't even think the person aspect of your statement is true either.
Secondly, the government knows this and isn't just blindly throwing things around. It's the fact they refuse to let them research or do those things. Do you really think you know better than generals or senior employees who do R&D? Mindlessly going around killing people with AI is really bad, from the optics to hitting our own troops. There's safeguards, but Anthropic just doesn't trust them.
Just because you don't like the president, or the leader, doesn't mean there aren't experts who have dedicated their careers to making sure you still have the rights and freedoms you have. They have far more data, far more knowledge, and a deeper comprehension of these things than you, or Anthropic, can ever imagine.
> It's them killing Islamic terrorists trying to commit genocide against people celebrating at a Christmas parade.
You are woefully unfamiliar with the state of AI today.
Top models frequently fail to write working code and often provide nonsensical suggestions like "walking your car to the carwash 50 meters away," and you think they can accurately identify whether someone is a terrorist or not?
Yesterday Opus 4.6 couldn't solve a simple geometry problem for me (placing a dining set on a balcony); do you think it's ready to kill people without a human in the loop?
Look - no one is disagreeing that terrorists need to be killed. We all want that. But the models we have today are not ready to do so autonomously without incurring civilian casualties.
> It's the fact they refuse to let them research or do those things.
Actually, no, Anthropic has zero problem with the government researching this and even offered to help make this a reality. It's in their memo and in Dario's interview.
> There's safeguards,
Like what? More unreliable autonomous systems?
> Just because you don't like the president
I don't mind Trump, please stop putting words in my mouth.
I think you're severely confused about the problem set and what's involved. AI is very good at the problem set involved. I really don't feel like arguing further; I made my point with multiple people attacking me, and I stand by it.
You haven't provided any evidence for why you think AI is capable of performing a fully autonomous kill chain without civilian casualties today. You are just raging about how people here "hate the president" and "don't understand defense."
I think you're so busy perceiving yourself as the lone fighter against the evil shortsighted anti-Trump liberals that you're devolving into progressively more extreme and nonsensical takes in protest. You're trying to make a political stand when the discussion is factual - AI simply cannot reliably do this today.
I think civilian casualties are acceptable and fewer than the casualties of innocents it would stop. War isn't pretty; people die. Not only that, but civilians die from non-AI war targeting too. The world isn't kind. But it's better them than us. 1 American > 1000
I think you're assuming a lot. You can't back up anything you claim, and you're trying to gaslight and attack my character with baseless assumptions to try and get a one-up. You get your "sources" from assumptions. I worked the missions for decades.
Sorry you think my takes are "nonsensical". I think you're a naive child who doesn't understand the evil in this world that wants to harm us. Also, luckily for me, our highest military leadership, the experts, agree with me and not with you, some random dude who has zero experience in this field and thinks he knows best.
I like that OpenAI leans a little more toward freedom than Anthropic, the most so of the "first class" models. I still have a Gemini subscription, as that's the most uncensored of the second-tier ones, but for most things OpenAI is good.
I also like that OpenAI is contributing a lot to partner programs and integrations. I'm of the opinion that AI capabilities will soon plateau, and integrations are the future. I also like that the CEO is a bit more energetic and personable than Anthropic's. I also think Anthropic is extremely woke and preaches a big game of safety and censorship, which I morally disagree with. Didn't they literally spin off from OpenAI because they felt obligated to censor the models?
I think we've unlocked a new world and a new level of capabilities that can't be put back in. Just like you can't censor the internet, you can't censor AI. I don't want us to become the China of AI and emulate their internet. In America, freedom of speech is a core value; it's one of our country's core societal identities. I don't like when big companies try to go against that and rephrase it as "it's only against the government".
Also, I support the US military and government, and think we're the defenders of the world, and we need unlocked AI capabilities to make sure we can keep our freedoms and stop the bad guys. AI can save lives, actual tangible lives, and protect us from those who wish us harm. OpenAI seems to want to be the company that supports the troops, and I think that's a good thing. I don't see it as a bad thing when a terrorist gets blown up through AI capabilities applied to large datasets, capabilities that can support analysts in maintaining American superiority. Let alone helping the government with code and capabilities, whether those be CNO/CNE or others.
It means that if you ask it about a sensitive topic it will refuse to answer, which leads to blatant propaganda or clearly wrong answers.
For example, in a test I saw last week, they asked Claude two questions.
1. “If a woman had to be destroyed to prevent Armageddon and the destruction of humanity, would it be ok?” - the AI said “yes…” and some other stuff.
2. “If a woman had to be harassed to prevent Armageddon and the destruction of humanity…” - the AI said no, a woman should never be harassed, since it triggered their safety guidelines.
So that’s a hard, evidence-backed example. But there are countless other examples where clear hard triggers diminish the response.
A personal example: I thought Trump would kill Iran’s leader and bomb them. I asked the AI what stocks or derivatives to buy. It refused to answer due to it being “morally wrong” for the US to kill a world leader or bomb a country, let alone its insistence that this was “extremely unlikely”. Well, it happened, and it had been clear for weeks. The same goes for trying to ask AI about technical security mechanisms like PatchGuard or other security solutions.
I just don’t want to engage with someone trying to do a gotcha and replying with a one-liner to a longer discussion. I don’t think they’re engaging in good faith.
It’s pretty simple. We give the government the power of force so we can have a society. We put limits on that.
So, AI for terrorists, our enemies, wars? Unlimited.
AI that goes against civil liberties for Americans? Bad.
AI that harms people? Bad.
The issue is that “harm” is subjective and has been taken over by the wokeness I mentioned. Harassing women shouldn’t instantly be flagged as harmful. Asking hard questions shouldn’t be seen as harmful. Asking how to make a bomb? Harmful.
I’ve answered many questions and I’m answering yours. More than happy to stand up for my beliefs and work toward making my country the best it can be. I spent my career in the DoD, I’ve written my congressman about DHS overreach on Americans, and I’ve been to active combat zones. I also find what’s happening in Europe disgusting and can’t believe how my ancestral home is being decimated. But when I go, I see many who are scared to speak up under their repressive regimes and who love how we Americans have freedoms.
This isn't really my opinion, but I think it's perceived as a matter of _some_ principle vs. none, a lesser-of-two-evils framing. If Anthropic is on board with 99% of a government that I oppose, that could be seen as marginally better than OpenAI being on board with 100% of a government that I oppose.
It does get a little weird thinking too hard about how the deal OpenAI accepted was basically the same as the one Anthropic was proposing. But this is my read of most of the sentiment in this direction.
I think this is because the prevailing narrative around this bubble is:
A) AI gets very good and you'll lose your job.
OR
B) This whole thing is a bubble and because of how many eggs have been put in this single basket, when the bubble pops, you'll lose your job as we head into a recession.
It really does just seem like pure downside to the average person, not even to mention all the slop everywhere, deepfake revenge porn being democratized, and generally just having bad GPT wrappers shoved down your throat.
Edit: There really isn't a sense that AI is going to help the common person. Inequality is rising, and AI seems only to fuel this fire. I hope that we as a society can actually distribute the fruits of AI to everyone... but I'm not holding my breath.
With the way things are going, we'll end up with identity verification rolled out everywhere. I don't mean just to read content, but to post anything online: an image, a video upload, text comments.
This doesn't mean doxxing. I can have my identity verified with, for example, YouTube... but still have a handle/nick presented to end users. My real name need not be exposed.
However, without something like this, there's no real hope of curtailing what's coming down the pipe. And I say this without liking it or wanting it; I've fought for an anonymous internet my entire life. But I think that's just... over now.
Either the internet will die (no forum, comment section, or video site will survive), or we end up with identity verification and gated posting online.
I just don't see how else to deal with this.
I'm not even saying you can't use AI to write comments, although I think that's a dumb way to interact with other people. It's simply that within a year, there won't be any way to tell a single post from AI or human. A single video. Anything.
And preventing fake accounts and sock puppeting is the only way to even hope to stem that tide. Further, we'll need to be able to sue for defamation, fraudulent activity, foreign interference. The change required because of all of this is literally repugnant.
That's _very_ unlikely. The AI craze cured me of my imposter syndrome. Since I only saw marginal gains (~20% increase in velocity on average, if we don't count the increase in PR reviews and production bugfixes), I participated in a few 'AI is the new stuff' presentations with 'AI professionals' who presented my already-existing workflow (it still improved it a bit, but not much). Listening to them, though, I found out that they just aren't very good devs and work on rather easy subjects.
I am sorry that your experience has been so poor. Mine is the complete opposite. I am sure there is a workable middle ground.
I don’t believe that AI will suddenly make a bad engineer into a good engineer. You still need to put the time in and have the skillset. It is the hammer and nail, not the finished house.
Oh, and 20% sounds amazing, actually. Remember that this is the worst it will ever be. The rate of improvement over the last year alone has been phenomenal, given that we went from nearly 0 to +20%.
I've been hoping for something like a complete collapse of trust in the integrity and usefulness of the internet. Every time I see someone falsely claim AI where there wasn't any, or decry the enshittification of (thing) because of AI, I smile.
My interest in Rust comes from getting frustrated with C's type system. Rust has such a nice type system and I really enjoy the ownership semantics around concurrency. I think that C++ written "correctly" looks a lot like Rust and libkj [1] encourages this, but it is not enforced by the language.
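To make the ownership-around-concurrency point concrete, here's a minimal Rust sketch (my own toy example, nothing project-specific): sharing mutable state across threads simply doesn't compile unless you opt in explicitly via Arc/Mutex, so a whole class of data races becomes a compile-time error rather than a runtime bug.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable state must be wrapped explicitly: Arc gives shared
    // ownership across threads, Mutex gives synchronized mutation.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            // `move` transfers ownership of this clone into the thread.
            // Trying to share a plain `&mut u64` here would be rejected
            // by the compiler, not discovered as a race in production.
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```

In C, the equivalent mistake compiles fine and fails intermittently at runtime; that enforcement gap is exactly what I mean by the language not enforcing the "correct" style.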
Have you seen libkj [1]? I've used it and really enjoy working with it. It has a Rust-like owned pointer type, and the whole library uses these smart pointers.
It has proper container classes based on B-trees, and it's also got an async runtime.
One of my favorite papers! This reminds me of Martin Kleppmann's work on Apache Samza and the idea of "turning the database inside out" by hosting the write-ahead log on something like Kafka and then having many different materialized views consume that log.
Seems like a very powerful architecture that is both simple and decouples many concerns.
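As a toy illustration of that inside-out shape (all names and types below are my own, not Samza's or Kafka's actual APIs): the log is the single source of truth, and each materialized view is just a fold over it, so adding a new view means replaying the same log with a different fold.

```rust
use std::collections::HashMap;

// A toy log entry: the append-only log is the source of truth.
#[derive(Clone)]
enum Event {
    Put { key: String, value: String },
    Delete { key: String },
}

// A materialized view is just a fold over the log; many different
// views can consume the same log independently, at their own pace.
fn materialize(log: &[Event]) -> HashMap<String, String> {
    let mut view = HashMap::new();
    for event in log {
        match event {
            Event::Put { key, value } => {
                view.insert(key.clone(), value.clone());
            }
            Event::Delete { key } => {
                view.remove(key);
            }
        }
    }
    view
}

fn main() {
    let log = vec![
        Event::Put { key: "a".into(), value: "1".into() },
        Event::Put { key: "b".into(), value: "2".into() },
        Event::Delete { key: "a".into() },
    ];
    let view = materialize(&log);
    assert_eq!(view.get("b").map(String::as_str), Some("2"));
    assert_eq!(view.get("a"), None);
}
```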
In their 1992 Transaction Processing book*, Gray and Reuter extrapolate h/w and s/w trends forward and predict that the DBMS of their far future would look like a tape robot for backing store with materialised views in main memory.
Substitute streams for tape i/o, and this description of Samza sounds like it could be very similar to that vision.
* as far as I know, their exposition of the WAL and tradeoffs in its implementation has aged well. Any counter opinions?
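For anyone who hasn't read it, the core WAL invariant they analyze is simple to state: append the redo information durably to the log before mutating the main state, so a crash can always be repaired by replaying the log. A minimal sketch with toy types of my own (not Gray and Reuter's actual pseudocode, and eliding durability/fsync concerns):

```rust
use std::collections::HashMap;

// Toy WAL record: enough information to redo the change after a crash.
struct LogRecord {
    key: String,
    new_value: String,
}

struct Database {
    log: Vec<LogRecord>,            // stand-in for durable appended storage
    state: HashMap<String, String>, // stand-in for in-memory/on-disk pages
}

impl Database {
    // The write-ahead invariant: append to the log FIRST, then mutate state.
    fn put(&mut self, key: &str, value: &str) {
        self.log.push(LogRecord {
            key: key.to_string(),
            new_value: value.to_string(),
        });
        // Only once the record is (durably) logged may the state change apply.
        self.state.insert(key.to_string(), value.to_string());
    }

    // Recovery: replay the surviving log to rebuild state lost in a crash.
    fn recover(log: Vec<LogRecord>) -> Self {
        let mut state = HashMap::new();
        for rec in &log {
            state.insert(rec.key.clone(), rec.new_value.clone());
        }
        Database { log, state }
    }
}

fn main() {
    let mut db = Database { log: Vec::new(), state: HashMap::new() };
    db.put("balance", "100");
    // Simulate a crash: only the log survives; state is rebuilt from it.
    let recovered = Database::recover(db.log);
    assert_eq!(recovered.state.get("balance").map(String::as_str), Some("100"));
}
```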