Edit: And if you believe in AI so strongly that you can't be arsed to write your own articles, I don't see why you wouldn't just ask it to do the (obviously AI-generated) optimization in the first place and not worry about the code.
> What do I do with those reports? Ignore them? Fix the bugs myself? Bleh.
"I don't have access to a test environment, but if you want to write a fix, let me know and I may be able to point you in the right direction" is a perfectly reasonable response.
If they're taking on verification, are they also taking on liability? Do we get to sue them if grandma gets scammed through an app they allow onto their phone?
If you drop the premise of writing, drop the premise that you need something well written. Just give me the same information you would have given the LLM.
But a poorly written prompt is not a good prompt. What are you really going to do with a shit prompt? It's meta: we need better writers all the way down.
I know what I'm trying to say, so I can sanity check the output. You can't, unless you listen to the monologue.
That's why I disagree with people who say "just give me whatever you gave the LLM." That's only useful if you, the writer of the prompt, have no intention of looking at the LLM's output before sending it.
Do you really want to read the whole conversation between the author and the computer? I don't use AI to write prose, but if I did, I'd treat it like a critical editor, so reading all of that would not save you any time.
My git-serving website, which only works from Plan 9, handles about a terabyte of web traffic monthly. Each page load is about 10 to 30 kilobytes. Do you think there's enough organic, non-scraper interest in the site for scrapers to be a near-zero part of the cost?
Basically, the selling point of LLMs is that you no longer need to think about problems, you can skip directly to results. Anything that you have to think about while using them today is somewhere on the product roadmap, or will be.