“Artificial Intelligence is a little software and a lot of data.”
Yorick Wilks, quoted in David Levy’s Robots Unlimited, 2006
From Wilks’ quote, it looks like the AI people (all of them!) are closer and closer to finally “getting it”, as it were. 1 I don’t mean to sound as though I had the answer all along or something, but…yeah, I kinda did.
Haha, no, I don’t really know anything about AI. So if I say anything wrong, please correct me. But my shallow knowledge of the history of the field indicates that initially, almost everyone in it was trying the opposite approach. Everyone was trying to get it all done with next to no data, by inference alone. Kind of like the content-free, knowledge-free, learning-free, anti-memorization “critical thinking” fad that’s got the education establishment by the ‘nads right now.
So we thought we would just have programs infer the truth using basically (predicate) logic alone. And there were some very interesting early successes, like ELIZA (the psychotherapist-simulator program), written in the mid-1960s, back when a kilobyte actually meant something and a gigabyte was nothing but a pipe dream. So: no Big Data, not much data period, mostly hand-written rules and pattern-matching, but she passes that Turing Test 2 with flying colors; many people refuse to believe she’s a machine…
Now, this is not just about me trying to look erudite by quoting third-hand information I got from a David Levy book or an Eliezer Yudkowsky talk or a Ray Kurzweil video. So to the point let me get.
Trying to learn a language by grammar rules is the same. As what? As trying to implement artificial intelligence (or something approaching it) by inference alone. The idea with the learn-by-grammar-rules school of thought is: “we just give you the rules and you memorize these rules and run through them to derive/infer the rest = any utterance you want to make”.
Well, you can’t do this. You can’t. It doesn’t work. It’s a beautifully efficient and elegant idea, and it isn’t worth a damn because it does. Not. Work. It doesn’t really even work on computers (the neural networks that block your credit card whenever you make a weird-looking purchase work off buttloads and craptons of “learning/training data”), and it definitely doesn’t work on your organic brain and body. You have to see the patterns. Tens of thousands of times. Only then will the rule (and all its little nuances and contingencies) effortlessly form itself in your head.
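Just to make the computer side of this concrete, here’s a toy sketch in Python (purely illustrative, nowhere near a real neural network): nobody codes the a/an rule anywhere in it, yet it gets words it has never seen right, just by counting examples. With tens of thousands of phrases instead of ten, the “rule” and all its little nuances would fall out the same way.

```python
from collections import Counter, defaultdict

# A comically tiny "corpus" of observed phrases.
# (Real pattern-learning needs vastly more data; that's the whole point.)
corpus = [
    "an apple", "a banana", "an egg", "a dog", "an idea",
    "a cat", "an orange", "a house", "an umbrella", "a tree",
]

# Count which article showed up before which first letter.
# Note: the vowel rule is never written down anywhere in this program.
counts = defaultdict(Counter)
for phrase in corpus:
    article, noun = phrase.split()
    counts[noun[0]][article] += 1

def guess_article(noun):
    """Pick whichever article the examples used most for this first letter."""
    seen = counts.get(noun[0])
    return seen.most_common(1)[0][0] if seen else "a"  # fallback: plain "a"

print(guess_article("elephant"))  # -> "an", generalized from "an egg"
print(guess_article("bicycle"))   # -> "a", generalized from "a banana"
```

No logic, no grammar, just precedent; scale the corpus up and you get (roughly) how the fraud-detection networks and the rest of the data-hungry crowd operate.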
I sometimes hear people say things like:
“I’m good at French, I just don’t have a vocabulary.” 3
No, you’re bad at French.
“I have a big vocabulary, I just don’t know how to use it.”
No, you’re bad at French. OK, I’m just being mean and flippant. But what I’m trying to get at is…
Part of knowing a language is knowing the words and how and where to use them.
Part of having a vocabulary is knowing how to use it…if you don’t know how to use it…you don’t own it….
If it’s not second nature yet, then you don’t know it. It’s not that you’re “bad” at the language…you’re just not used to it yet, and that’s basically the same thing. Because being good at something is being used to it.
It all comes back to the too much technique (logic, programming) and too little volume (data, exposure, immersion) issue. As Yorick Wilks might put it, learning a language is a little SRS and a lot of immersion, and SRS is a little explanation (next to 0) and a lot of examples.
It seems clear to me that we humans do not infer using explicit rules; we infer by analogy, by example, and the “rules” are implied by the volume of precedent; the rule is an emergent property; the rule emerges from reality, not reality from the rule…a lot like that unwritten British Constitution, really. 4
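For what it’s worth, you can actually watch a rule “emerge” in a few lines of toy Python (again, purely illustrative, not a serious NLP system): feed it example verb pairs and the add-“-ed” pattern surfaces as the majority suffix, while the irregulars (“go”/“went”) just sit there as memorized precedent, never fitting any rule at all.

```python
from collections import Counter

# Toy example pairs of (base, past). The "-ed rule" is never stated;
# it emerges as the most common pattern in the data.
examples = [
    ("walk", "walked"), ("jump", "jumped"), ("play", "played"),
    ("cook", "cooked"), ("climb", "climbed"), ("go", "went"),
]

suffix_counts = Counter()
irregulars = {}
for base, past in examples:
    if past.startswith(base):
        suffix_counts[past[len(base):]] += 1  # regular pattern: base + suffix
    else:
        irregulars[base] = past               # doesn't fit; memorize it whole

# The "rule" is just whatever pattern the precedent favors.
rule_suffix = suffix_counts.most_common(1)[0][0]

def past_tense(verb):
    """Memorized exception first, emergent majority rule second."""
    return irregulars.get(verb, verb + rule_suffix)

print(past_tense("talk"))  # -> "talked", via the emergent "-ed" rule
print(past_tense("go"))    # -> "went", via memorized precedent
```

The rule emerged from the examples; nobody legislated it in advance. Unwritten constitution, in ten lines.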
And that, friends, is why no one can use a Linux man page to save their lives…Linux man pages are useless because they (typically) show general rules but not specific examples. Closer to home, anyone who has taught English for a living or simply been asked an English grammar question by (let’s say) a Japanese person knows that you come up with the explicit rule on the spot, for the first time, because you were asked for it, after a lifetime of reality and examples and implication. 5 You only even realize there’s a rule (of sorts) because you were asked for it. And even then, some things are highly resistant to being expressed as “rules”. Quick, tell me how and when to use “the” instead of “a” — no examples, that’s gauche and primitive and “Oriental” 6, just give me an elegant rule I can remember. La logique über alles!
Yeah, good luck with that 😉 .
- I hear that the leading edge in machine translation has given up the dictionary-and-grammar-rule ghost and is moving towards just using loads and loads of example sentences. It’s not elegant, but it sounds effective. ↩
- informally, at least…not the official one ↩
- I hear people say that because I may or may not at one point have been one of the people saying it. So it especially catches my ear when I hear other people say it. ↩
- Blah blah, Magna Carta, spare me your geeky exceptions. Speaking of the connection between the law and AI/programming, there was an interesting account here (somewhere in comments, can’t find it now) of a program written for a state(?) government in the US that was so good, so logical, so clear and complete that it eventually got converted (“ported”, if you will) line for line into law. ↩
- More interesting case: Do you have a language teaching certification? That’s an even better example — you’re probably one of the few English teachers, nay, English speakers, who actually knows the “rules” of the language, but you could already speak and use this language fluently before you learned (learnt? I’m getting self-conscious again 😉 ) them. If the rules had been essential to you knowing and using English, you’d have been unable to take your certification course, let alone live your daily life. ↩
- I KNOW, right?!: “The whole blueprint of school procedure is Egyptian, not Greek or Roman.” — I think John severely dropped the ball on that one. But then, when your work’s as good as his, you’re allowed a slip-up. ↩