I grow tired of Kurzweil's vague arguments against people who disagree with his vague predictions.
What I think Kurzweil doesn't understand is that in any argument about what's going to happen in the future, the onus of proof inevitably lies with the guy saying "This is what's going to happen", not the guy saying "Ehh, maybe not".
I don't know what's going to happen in the future, and I don't pretend to know what's going to happen in the future, but whatever happens either (a) I'll find out eventually or (b) it'll happen after I'm dead anyway. But john_b's point about Kurzweil's lack of a null hypothesis is a good one.
So my question for Kurzweil is this: what will the world look like if you're wrong? What possibilities are your predictions excluding? If I'm still alive in 2060, and I look around at the world around me, under precisely what conditions am I entitled to say "Well whaddya know, looks like Kurzweil was wrong about that Singularity thing after all"?
> the onus of proof inevitably lies with the guy saying "This is what's going to happen", not the guy saying "Ehh, maybe not".
I'd say that the onus lies on whoever is asserting a conjunction rather than a disjunction: a claim that needs many things to all go right demands more evidence than a claim that's satisfied if any one of many things goes wrong. Negative predictions are often disjunctions, but not always. Compare "in 2100, North America will be inhabited by humans" with "Ehh, maybe not" — there, it's the denial that's the long-shot conjunction.
>but whatever happens either (a) I'll find out eventually or (b) it'll happen after I'm dead anyway
It's not your main point, but I think you're leaving out an important option: (c) you can choose to be intimately involved in making the future turn out a certain way.
The best reason to think and write about the future is so that we can decide what future we want to create for ourselves. Then, as someone with the ability to write code, you can go out and create those very things.
IMHO, the future cannot be predicted. Period. I'm not talking impossible-as-in-hard. I am talking impossible-as-in-perpetual-motion.
Nice well-behaved linear Newtonian systems can be modeled and predicted. There are systems that are chaotic but that can be modeled in the aggregate very well too, like thermodynamic systems and certain kinds of fluid flow.
Life isn't like any of that. Life is complex, chaotic, computationally irreducible, and full of feedback loops on top of feedback loops. Even worse: predictions often create economic incentives to prove them wrong. Take a position on the stock market and you have created an incentive for your prediction to not come true.
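The sensitivity to initial conditions that makes such systems unpredictable is easy to demonstrate. Here's a minimal sketch using the logistic map, a standard toy model of chaos (nothing below is specific to markets or history; it just illustrates the mechanism):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); fully chaotic at r = 4.0.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # start a hair's breadth away

# Early on, the two runs are indistinguishable...
print(abs(a[1] - b[1]))  # still on the order of 1e-10
# ...but within a few dozen steps they decorrelate completely.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

A measurement error of one part in ten billion destroys the forecast within fifty steps, and no amount of better modeling repairs that.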
People have always wanted to deny the fundamental unpredictability of history, and have always clung to woo-woo prophecy superstitions toward this end. Our ancestors had pig entrails and Tarot cards. We have graphs and computer models.
Of course, predicting the future with 100% certainty is impossible. But that doesn't mean that making predictions is a mug's game. Predicting the future, with appropriate levels of uncertainty, is a very sensible thing to do with your time. I, for instance, predict that if I wander down to Cheeseboard in twenty minutes I'll find that they're selling delicious pizza. And I predict that if I eat that pizza, it won't poison me. These predictions may be wrong, but they're useful nonetheless.
It's only when you start sticking inappropriate error bars on your predictions that it becomes a problem. Kurzweil predicts things which are unlikely or perhaps impossible as having probabilities near 100%.
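One way to make "inappropriate error bars" concrete is a proper scoring rule like the Brier score, which punishes a confidently wrong forecast far more than a hedged one. A minimal sketch (the forecasts and outcomes below are invented purely for illustration):

```python
def brier_score(probs, outcomes):
    # Mean squared error between forecast probabilities and 0/1 outcomes.
    # Lower is better; always predicting 50% scores 0.25.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

happened = [1, 0, 0, 1, 0]                     # what actually occurred
near_certain = [0.99, 0.99, 0.99, 0.99, 0.99]  # "this WILL happen", every time
hedged = [0.6, 0.4, 0.4, 0.6, 0.4]             # modest, roughly calibrated

print(brier_score(near_certain, happened))  # ~0.588: badly punished
print(brier_score(hedged, happened))        # 0.16: much better
```

Predicting at near-100% probability only pays off if you're nearly always right; otherwise honest uncertainty beats confident prophecy.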
I would say that the ability to predict the future is indeed a large part of what we call intelligence. Note that 'intelligent' is a relative term, though: you are considered 'intelligent' if you can predict the behavior of a system at a success rate significantly higher than the average observer, given similar or equivalent prior knowledge of the system.