Iversen wrote: ... so maybe it would be simpler just to ask why "to happen" only is allowed in the third person?
Cainntear wrote:Have you never happened upon structures like this...?
Not without "upon" (I also mentioned that "happen" occurs in the construction "make happen", but then you need "to make").
Iversen wrote:As for AI: when I first heard about it some years ago, the explanation was that you let a computer run through thousands upon thousands of experiments which mostly fail, but then the computer learns which ones are the acceptable ones and builds its next steps on that - and the process is so complicated that no human can follow it in detail.
Cainntear wrote:That's machine learning, not AI. But we call it AI in common parlance because that's sci-fi. And we call it AI in marketing because it sells well.
OK, correction - that was the second time. The first time I saw the expression used about attempts (or rather aspirations) to create artificial intelligence, there were some people who thought it could and should be programmed in detail by a horde of engineers in white overalls. Among other things, that was how some thought search engines and translation programs should be created - but this would of course have been an insurmountable task.
Then some clever people invented neural networks that could run countless failed experiments by themselves, get them evaluated somehow and learn from the few that succeeded. The price was that no human being had an inkling about the internal rules the machine had developed before it could walk up steps or play chess. I have seen a very simple version of this with a virtual robot that was supposed to learn to walk - the criterion being that it should move along and not tumble over. It went through more weird walks than even John Cleese could have imagined, but ended up with a version that worked. I have also seen reports on programs that can propose diagnoses based on answers to questions and medical test results. Such programs have to be trained on loads of input plus some kind of success criterion and evaluation, but ultimately they learn to propose diagnoses that aren't worse than those you get from a short telephone consultation - at least that's the claim. And all those things were categorized as AI, but of course they were built on machine learning (that's the new thing about neural networks: they can learn).
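The walking-robot story above - try lots of random variations, score each one against a success criterion, keep only the rare improvements - can be sketched in a few lines. This is a toy random-mutation hill climber, not any particular system; the "gait" parameters and the fitness function are entirely made up for illustration (a real robot would score actual distance walked without falling).

```python
import random

def fitness(gait):
    # Made-up stand-in for "distance walked without tumbling over":
    # the score peaks when every gait parameter is near 1.0.
    return -sum((g - 1.0) ** 2 for g in gait)

def mutate(gait, scale=0.1):
    # Randomly tweak each parameter - most mutants will be worse,
    # just like most of the robot's weird walks failed.
    return [g + random.gauss(0, scale) for g in gait]

def evolve(steps=2000, n_params=4):
    # Start from a random gait and keep only the rare successes.
    best = [random.uniform(-1, 1) for _ in range(n_params)]
    best_score = fitness(best)
    for _ in range(steps):
        candidate = mutate(best)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    gait, score = evolve()
    print(gait, score)
```

No human writes down the walking rules; they emerge from the loop - and, as noted above, nobody can point to where in the final parameters the "rule" lives.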
And now it seems that people try to connect a neural network with virtual tentacles that extend into the worldwide internet, in the hope that it will look as if it had tapped into the real world out there. And the result is chatbots that apparently can already fool a gullible Turing tester (otherwise nobody in the school systems would have reason to fear the chatbots). The next step is when the machines begin to 'think' unexpected thoughts by themselves and not just because they eat human-made garbage, and maybe they'll also develop some kind of self-reflection - and THEN I think we'll have to recognize those processes as a non-human kind of intelligent thinking.
By the way, human thinking is just the result of a lot of neurons firing more or less haphazardly ... but we judge it on the output.