Memristors are the silicon equivalent of neurons -- each is a time-dependent function with state. In mammals, the connectivity of millions of neurons enables the emergence of intelligence. I don't see any reason why a network of silicon-based neurons of sufficient size isn't capable of the same.
Now, that's entirely different from purposeful design of an AI such that it speaks English and knows what I like for breakfast. I don't think we'll ever know how to sit down and write one in Notepad.
Instead, memristor-based AIs will be evolved using genetic algorithms or other evolutionary approaches. Yeah, maybe that's wishful thinking too, but that's how I see it playing out.
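To make "evolved, not designed" concrete, here is a toy sketch of a genetic algorithm searching for the weights of a tiny network that computes XOR, rather than anyone writing those weights down by hand. Everything here -- the population size, mutation rate, fitness function, and network shape -- is an illustrative assumption, not a claim about how a real memristor-based AI would be evolved.

```python
import math
import random

# Truth table for XOR: the behavior we want to "evolve" rather than design.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A 2-2-1 network with tanh units; w is a flat list of 9 weights.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error over all cases: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def evolve(generations=300, pop_size=60):
    random.seed(0)  # fixed seed so the run is repeatable
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # selection: keep the top quarter
        # Refill the population with mutated copies of random parents.
        pop = parents + [
            [w + random.gauss(0, 0.3) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
```

No human ever specifies the weights; selection pressure on behavior does the work. That's the sense in which I think these systems would be evolved rather than written in Notepad.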
A memristor can be simulated with a couple of formulas and a few variables (just like neural networks can be).
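To show just how few formulas and variables that is, here's a minimal sketch in the style of the linear-drift memristor model: resistance is a mix of a low and a high state, and the mixing fraction drifts with the current that has passed through. The parameter values and the lumped drift constant K are illustrative assumptions, not measured device figures.

```python
import math

R_ON, R_OFF = 100.0, 16_000.0  # ohms: low- and high-resistance states (assumed)
K = 1e4                        # lumped drift constant (assumed)
DT = 1e-5                      # integration time step, seconds

def simulate(voltages, x=0.5):
    """Drive the memristor with a voltage waveform; return its resistance over time.

    x in [0, 1] is the single state variable: the fraction of the
    device in its low-resistance state.
    """
    history = []
    for v in voltages:
        m = R_ON * x + R_OFF * (1.0 - x)  # formula 1: current resistance
        i = v / m                         # Ohm's law
        x += K * i * DT                   # formula 2: state drifts with current
        x = min(max(x, 0.0), 1.0)         # state is physically bounded
        history.append(m)
    return history

# One period of a 50 Hz sine drive, sampled at DT.
wave = [math.sin(2 * math.pi * 50 * n * DT) for n in range(2000)]
rs = simulate(wave)
```

Two formulas, one state variable: the whole "neuron-like" behavior -- memory of past current -- fits in a handful of lines.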
In spite of our ability to simulate this sort of process for decades, we have not succeeded in building a strong AI. The difference a hardware version makes is one of performance (just as neural-network chips are typically a lot faster than their software-simulated counterparts). There is no real difference in capability here, just a (possibly very large) speedup.
Now, I'm not ruling out that such a speedup will let us create things that so far were not possible, but I have a hard time convincing myself that this will almost certainly be the case.
I think there's a very real minimum speed necessary to keep a highly connected system like your brain, an AI, or the Internet operational. Does anyone seriously expect 'TCP over Carrier Pigeon' (RFC 1149) to work in practice at scale? Or that your own wetware would've developed properly if it plodded along forever below 13 Hz?
Given that biological examples of a minimum speed for cognition exist, and that computational networks at various speeds seem to follow the same pattern (slower = worse), it seems reasonable to assume that a similar lower limit for cognition exists for silicon-based networks.
Therefore, faster devices such as memristors might be just the thing needed to get our machines thinking -- and moreover, we may never see intelligent behavior in slower simulated environments.
Note that even hardware speedups need insights to be successfully implemented. Seeing that hardware does improve at a regular pace, it looks like insights do pop up regularly.
Therefore, it isn't such a stretch to think that (i) insights may continue to pop up reliably for "a while", and (ii) not just in the domain of hardware speed-ups.