But are you OK with the trendline of AI improvement? The speed of improvement suggests humans will only get further and further removed from the loop.
I see posts like yours all the time, comforting themselves that humans still matter, and every time, people like you are describing a human owning an ever-shrinking section of the problem space.
It used to be the case that the labs were prioritising replacing human creativity, e.g. generative art, video, writing. However, they are coming to realise that just isn't a profitable approach. The most profitable goal is actually the most human-oriented one: the AI becomes an extraordinarily powerful tool that may be able to one-shot particular tasks. But the design of the task itself is still very human, and there is no incentive to replace that part. Researchers talk a bit less about AGI now because it's a pointless goal. Alignment is more lucrative.
Basically, executives want to replace workers, not themselves.
On the contrary, the depth and breadth of software work we can handle agentically is growing very rapidly, to the point where in the last three months the industry has undergone a big transformation and our job functions are fundamentally starting to change. As a software engineer, I increasingly feel like AGI will be a real thing within the next few years, and it's going to affect everyone.
If you look at those operating at the bleeding edge, it doesn't look anything like yesteryear. It's a real step change. Fully autonomous agentic software engineering is becoming a reality. While still in its infancy, some results are starting to be made public, and it's mind-boggling. My team at work is transitioning to a fully agent-only workflow. The engineering task has shifted from writing code to harness engineering: essentially building a system that can safely build itself to a high standard given business requirements.
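To make "harness engineering" a bit more concrete: at its core it's an outer loop around the model with a verification gate the agent doesn't control. Here's a rough Python sketch of the shape, not my actual implementation; complete_fn and apply_patch are stand-ins for whatever model call and patching mechanism you wire in, and pytest stands in for whatever quality gate you trust:

    # Minimal harness loop: propose a change, apply it, verify, repeat.
    import subprocess
    from typing import Callable

    def run_tests(workdir: str) -> tuple[bool, str]:
        """Run the project's test suite and report (passed, output)."""
        result = subprocess.run(["pytest", "-q"], cwd=workdir,
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def harness_loop(requirement: str,
                     complete_fn: Callable[[str], str],
                     apply_patch: Callable[[str], None],
                     workdir: str,
                     max_iterations: int = 5) -> bool:
        """Drive the agent until the verification gate passes or we give up."""
        feedback = ""
        for _ in range(max_iterations):
            prompt = f"Requirement:\n{requirement}\n\nPrevious feedback:\n{feedback}"
            patch = complete_fn(prompt)          # agent proposes a change
            apply_patch(patch)                   # harness applies it to the repo
            passed, output = run_tests(workdir)  # gate the agent doesn't control
            if passed:
                return True
            feedback = output                    # feed failures back into the next attempt
        return False

The point is that the quality bar lives in the gate (tests, linters, policy checks), not in the model's self-assessment.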
Up until recently I kinda felt like the scepticism was warranted, but after building my own harness that can autonomously produce decent-quality software (at least at toy-problem scale, granted), and getting hands-on with autoresearch by writing a set of skills for it (https://github.com/james-s-tayler/lazy-developer), I feel fundamentally different about software engineering now.
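On "skills": the rough idea is that each skill is a named instruction bundle plus the checks that have to pass before its output counts, so the harness can chain them without trusting the model's own judgement. A purely illustrative sketch, not the actual format in that repo:

    # Illustrative only: a "skill" as instructions plus acceptance checks.
    from dataclasses import dataclass, field

    @dataclass
    class Skill:
        name: str
        instructions: str  # what the agent is asked to do
        acceptance_checks: list[str] = field(default_factory=list)  # commands that must exit 0

    write_failing_test = Skill(
        name="write-failing-test",
        instructions="Given the requirement, write one failing test that captures it.",
        acceptance_checks=["pytest -q --collect-only"],
    )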
Look at the step change from Sonnet 4.5 to Opus 4.5 and what that unlocked, then consider that the rumoured Mythos model is apparently not just an incremental improvement but another step change. Pair that with infrastructure for operating agents at scale, like https://github.com/paperclipai/paperclip, and SOTA harnesses like the ones being written about on the frontier labs' blogs... I mean... you tell me what you think is coming down the pipe.
Humans, driven by curiosity, ask new questions that push the boundaries further, find new directions, ways, or motivations to explore, and maybe invent new spaces to explore. LLMs are just tools that people use. When people are no longer needed, AI serves no purpose at all.
People can use other people as tools. An LLM being a tool does not preclude it from replacing people.
Ultimately it's a volume problem. You need at least one person to initialize the LLM, but after that, in theory, a future LLM can replace all people except the one who initializes it.