Listening to great thinkers can educate us not only with knowledge but with visions of the future. For much of "the future" is already here; it's just unevenly distributed. Much of what's coming already exists -- many people simply haven't encountered it yet.
Tom Wolfe paid close attention to everything. His careful observations gird and illuminate his writing. During the last two decades of Wolfe's life (2000-2018), he became convinced that neuroscience (the study of the brain) would soon revolutionize the world.
Wolfe was right, but in an unexpected way.
Artificial intelligence (AI) will change life dramatically in the next decade. We've all heard this prediction, but few of us grasp what AI is, where it came from, and what it's going to do.
An excellent primer on these subjects was just delivered by Prof. Geoffrey Hinton, an early AI researcher who was recently awarded the Nobel Prize in Physics. Prof. Hinton's explanation appears on Jon Stewart's weekly podcast (available on YouTube, etc.).
AI grew out of research Prof. Hinton and others commenced in the 1970s. They were studying how the human brain works and wondering if they could get a computer to mimic it. Obviously, at the time, computing power was rudimentary. Hinton et al. developed valuable insights into the process of learning. They took lessons from human biology and abstracted them into potential mechanics. Essentially, they discovered how a brain learns things and then applied that process to machines. Their discovery radically breaks from traditional computer science, whose model simply instructs machines with rules to follow. A computer programmed that way can never LEARN; it can merely compute.
Prof. Hinton posited in the 1980s that a machine could learn if taught how to learn and then set free to do its magic. The model -- a human brain -- reveals the learning process. In a "neural network," the brain (or an AI computer) contains billions of individual switches (neurons), joined by trillions of connections, that can be turned on or off. The probability of a switch turning on can be increased or decreased by what other nearby switches are doing. Once enough switches turn on, simple knowledge is obtained. As more switches turn on, deeper knowledge is reached. But how can computers learn to do that?
By working backward: taking actual known knowledge and propagating a machine's errors backward through its neural network, nudging the probabilities at each switch toward better answers -- a technique now called backpropagation. Hinton realized -- in a eureka moment -- how the process of learning could be replicated in an inanimate machine.
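The working-backward idea can be sketched in a few lines of Python. This is a toy illustration under stated assumptions -- the network size, the learning rate, and the task (learning the XOR function from four known examples) are all my own choices for demonstration, not Hinton's actual work. It shows the loop the text describes: guess forward, compare to known knowledge, and push the error backward to adjust each connection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known knowledge to work backward from: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weights: a 4-switch hidden layer and 1 output switch.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    # Squashes any value into (0, 1): the "probability a switch turns on."
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how hard each error nudges the weights
losses = []
for _ in range(10_000):
    # Forward pass: switches turn on with some probability.
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # the network's current guesses

    losses.append(float(((out - y) ** 2).mean()))

    # Backward pass: compare guesses to known answers, then push the
    # error back through the network, adjusting every connection.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"error fell from {losses[0]:.3f} to {losses[-1]:.4f}")
```

After enough passes the error shrinks and the network answers XOR correctly, even though no one ever wrote a rule for XOR -- it learned from examples alone, which is exactly the break from rule-following computing the post describes.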
This revolutionary discovery was knee-capped in 1986 by two things: inadequate computing power and insufficient material (known knowledge) to feed machines. Since then, of course, computing speed has increased exponentially. And the content problem (material used to feed computers) was solved by the development of the Internet.
Now Hinton's discovery, foretold by Tom Wolfe, is possible. And the race is on. Billions of dollars are being spent by dozens of large corporations to build AI machines. Since everyone can use the same advanced hardware, the programs differ mostly not in the computers themselves but in what they are fed and how they consume it.
So that's where we are. Any questions?
This is a really insightful breakdown of the history of AI. I always appreciate a post that traces an idea back to its origins, especially linking it to great thinkers like Tom Wolfe and pioneers like Prof. Geoffrey Hinton.
I think the point you make about how AI radically breaks from traditional computer science is key. The difference between instructing a machine with rules and teaching it how to learn -- that's the core conceptual leap. It's fascinating to realize that the fundamental mechanism for our future was essentially a reverse engineering of the human brain's neural network.
It makes you wonder what other natural processes are waiting to be abstracted into the next technological revolution.
Absolutely right. You grasp exactly what I am trying to explain.
Those of us who study biology -- including the fantastic abilities of insects and plants -- are excited to see if we can replicate such magic with machinery.
It's certainly "interesting times" as the old saying goes. BTW, was your 'not evenly distributed' quote from William Gibson? 🤔
The AI we have now seems very much on the copying route -- not to say that that's not impressive, in terms of engineering/software systems.
Actual creativity still seems to be the remit of humans, and current creative thought experiments have the AI losing the plot after a dozen questions.
Interestingly, Duolingo let a load of staff go as AI hype kicked in... only to rehire later. I don't know if that's 2+2=5, though.
I did read two things that made me think about AI and using it. One was that it can only rehash what's already here. Metaphorically, it'll help you make a better candle, but it'll never invent the lightbulb.
The other was a small study -- the author said the target group was too small -- finding that students who use AI to write their essays struggle to retain information from the study topic. I wonder if that's the 'spell check effect'?
Early days, but it's here and technology does move at a pace.
You make good points but what worries some is how accelerated the pace of further capabilities may be. Things could get hairy quickly, certainly before government has any chance to regulate or limit the technology.
This is such a clear and compelling summary. I love how you’ve tied Tom Wolfe’s vision to what Geoffrey Hinton set in motion decades ago. It’s striking how an idea can sit dormant until the world catches up with it. The way AI has shifted from theory to reality really does feel like that “unevenly distributed future” finally arriving.
Lotte x
Exactly. Your perception is laudable.
This is a great summary. I still absolutely loathe AI, though. I'm fed up with lazy colleagues using it all the time to do things they should be doing with their brains -- and with the amount of power it requires!
I'm 100% with you on that. The lazier people act, the dumber they become. Exercising our minds is essential.
Great explanation! I studied Computational Linguistics in the late 90s, so I've heard of some of this, plus I work with a ton of AI experts in my job (not that I use AI -- I do believe that humans should do creative things, not computers, which are tools).