Some Thoughts on Thinking Machines

I’ve been thinking a lot about what I called “Getting Better at Getting Better” in my book. It’s the idea of accelerating machine intelligence, where computers aren’t just getting better at solving problems, but the pace at which they get better increases drastically. I think this comes in two forms: 1) machine learning models that improve as we provide higher quantities of quality data, and 2) evolutionary algorithms that innovate through variation and selection.
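To make the second form concrete, here’s a minimal sketch of an evolutionary loop in Python. Everything in it is a made-up illustration rather than any particular system: the target, fitness function, mutation rate, and population size are all arbitrary choices.

```python
import random

# A minimal evolutionary loop: candidates are lists of numbers, and
# "fitness" is just closeness to an arbitrary target. The population
# size, mutation rate, and target are illustrative choices only.

TARGET = [3.0, 1.0, 4.0, 1.0, 5.0]

def fitness(candidate):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    # Each gene gets a small Gaussian nudge with probability `rate`.
    return [c + random.gauss(0, 0.5) if random.random() < rate else c
            for c in candidate]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-10, 10) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print(evolve())  # Converges toward TARGET without being told how to get there.
```

The point of the toy is that nothing in the code describes how to reach the target; variation plus selection finds it on its own, which is the basic mechanism behind the second form of improvement.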

I’m also reading a book called What to Think About Machines That Think, which is a collection of short thoughts by dozens of experts in various fields on whether computers will soon be able to think like—or better than—humans.

This spawned a few ideas of my own on the topic, but since I’m not an expert in the area I was at first reluctant to capture them. Then I remembered that I write about topics outside my expertise all the time, and that I just need to maintain an appropriate respect for my limitations. So here are some random ideas about the nature of human intelligence, whether machines will be able to achieve it, and related topics.


  1. First, I don’t think human intelligence is all that special. I think it’s absolutely a matter of the number of connections, and this seems to be what we’re seeing as we increase the complexity of our neural nets, which has yielded extraordinary results in Deep Learning.
  2. Second, consciousness, as many experts in neuroscience, philosophy, and related fields have suggested, is not a single special thing sitting atop a mountain, but rather an emergent property of multiple, segmented components in the human brain reaching a certain level of complexity. Or as Daniel Dennett says, it’s simply a bag of tricks. Further, it’s my belief that this strange emergent property conferred its benefit by allowing one to experience and assign blame and praise, which provided tremendous advantage to early adopters who were creating communities.
  3. Third, the core game to be considered when looking at whether AI will become human-like is not intelligence or consciousness, but rather goals. Humans are unique in that our goals come from evolution. At their center are survival and reproduction, and every other aspiration or ambition sits on top of, and secondary to, those drives. So in order to make something like a human, it seems to me that you’d have to create something where every component of its being is steeped in a similar sauce. In other words, we were made over millions of years, step by step, with the goals of survival and reproduction guiding all successful iterations. So if we don’t want to end up with something extremely foreign to ourselves, we’ll need to somehow replicate that same process in machines. The alternative would be goals and ambitions that feel painted on rather than baked in, which I’m not sure would come across as authentic; the sketch after this list tries to illustrate the difference.
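Here is that sketch: a toy contrast between a goal that’s merely labeled onto an agent and behavior that emerges from a selection loop where only variants that keep themselves alive persist. The agents, environment, and every number here are hypothetical illustrations, not a claim about how real systems are built.

```python
import random

# Toy contrast for "painted-on" vs. "baked-in" goals. Everything here
# (agents, environment, numbers) is a hypothetical illustration.

def painted_on_agent():
    # The goal is just a label attached to a generic, goal-free policy.
    return {"goal": "survive", "policy": lambda energy: random.random()}

def baked_in_threshold(pop_size=50, generations=500):
    # Each "agent" is a single gene: forage when energy drops below it.
    # Variants that run out of energy are discarded, so whatever goal
    # the survivors appear to have was shaped by selection itself.
    population = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        survivors = []
        for threshold in population:
            energy = 0.5
            for _ in range(20):          # one short simulated lifetime
                if energy < threshold:
                    energy += 0.3        # forage for food
                energy -= 0.2            # living costs energy
                if energy <= 0:
                    break                # this variant dies out
            else:
                survivors.append(threshold)
        if not survivors:                # keep the toy from going extinct
            survivors = population
        # Refill the population with mutated copies of survivors.
        population = [min(1.0, max(0.0, t + random.gauss(0, 0.05)))
                      for t in random.choices(survivors, k=pop_size)]
    return sum(population) / len(population)

print(baked_in_threshold())  # Drifts toward thresholds that keep agents alive.
```

Neither toy proves anything, but the second at least gestures at the difference: its drive to eat isn’t written anywhere in the code, it’s the residue of everything that didn’t survive.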

__

I do a weekly show called Unsupervised Learning, where I curate the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.
