Wow - this has got to be one of the most interesting discussions I have read in a long time.
Reading through it all, I feel like there are a couple of basic things that are being missed, and I am curious what everyone thinks of them:
1) There seems to be an assumption that if a person/computer is smart enough, everything can be figured out Einstein-style, absent real-world experimentation. This seems like a pretty big jump to make. In trying to move up the exponential curve of intelligence/knowledge, wouldn't there still be a need for real-world experimentation? Given all human knowledge at this moment, I'm sure a super-smart being could figure out some things we don't yet know, but everything? Everything needed to manipulate atoms and build nanobots? I'm sure there could be several conflicting theories of reality that could each explain everything we "know" and could only be teased apart through experimentation. Experimentation that I have a hard time seeing happen on an exponentially faster timetable. Like the search for the Higgs boson: isn't it possible that, as you get into quantum mechanics, some of these experiments would require space and time that can't just scale exponentially?
2) Once the AI does start editing itself, selecting whether an improvement is actually better seems like a fundamentally trickier problem than the discussion suggests. As someone said previously, if you gave a human the ability to edit their own neurons, the most likely outcome is a dead human. It seems possible that a being with intelligence n couldn't know what change would be required for intelligence n+1. So it would be stuck experimenting, and most of those experiments seem likely to end in a dead or endlessly recursive program. Given that the more complex a being gets, the more likely any change is to be detrimental, it will be harder and take longer to stumble upon a change that is "better". This implies to me that at a certain point the intelligence curve will flatten rather than get steeper.
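To make that intuition a bit more concrete, here's a toy sketch (mine, not anyone's upthread; in Python, with everything modeled in the crudest possible way, roughly in the spirit of Fisher's geometric model): treat the "program" as a point in an n-dimensional parameter space, treat intelligence as closeness to some optimum, and count how often a random edit actually helps. As the system gets more complex (more dimensions), the fraction of random edits that improve things falls toward zero:

```python
# Toy illustration only: random edits to a high-dimensional, already-tuned
# "program" are less and less likely to be improvements as dimension grows.
import random
import math

def fitness(params):
    # Higher is better; the optimum sits at the origin.
    return -math.sqrt(sum(p * p for p in params))

def improvement_rate(n_dims, step=0.1, trials=2_000):
    # Start at unit distance from the optimum, as a well-tuned system would.
    base = [1.0 / math.sqrt(n_dims)] * n_dims
    base_fit = fitness(base)
    wins = 0
    for _ in range(trials):
        edit = [p + random.gauss(0, step) for p in base]  # a random self-edit
        if fitness(edit) > base_fit:
            wins += 1
    return wins / trials

for n in (2, 10, 100, 1000):
    print(f"{n:>5} dimensions: {improvement_rate(n):.1%} of random edits help")
```

Run it and the percentage of helpful edits drops from near 50% at low dimension toward essentially 0% at high dimension, which is the flattening-curve argument in miniature.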
3) Even if a change is "better", better seems very hard to interpret and select for. From what I've been reading, human forgetfulness isn't so much a bug of our brains as a feature: it's part of how we isolate the signal from the noise. Would a computer need to get more "forgetful" in order to get better at figuring out the nature of the world? So many of these problems look like the P vs NP question in computer science. Even if the computers get faster, if the problem is NP-hard, does it really matter? Getting back to the experimentation question from (1): if the AI ran an experiment, could it know when to say "this seems likely", set aside the remaining uncertainty, and move on, or would it get stuck exhaustively running experiments to prove that something is "true"? So if the program evolves to be more forgetful but more intuitive, is that better? I can imagine several "intelligence" trade-offs between rigorous knowledge and making a good guess, with the right balance hard to determine until well down the line... Is intelligence really linear, with a clear "better" in all situations for all types of problems that can easily be selected for?
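For the P vs NP point, here's a minimal illustration (my own sketch, using subset sum as a stand-in for any NP-complete problem): checking a proposed answer is cheap, but brute-force search doubles with every added item, so faster hardware barely moves the needle.

```python
# Subset sum: verifying an answer is polynomial, but brute-force search is
# exponential, so a 2x faster machine only buys you ONE more item.
from itertools import combinations

def verify(numbers, subset, target):
    # Polynomial time: just add up the proposed subset and check membership.
    return sum(subset) == target and all(x in numbers for x in subset)

def brute_force(numbers, target):
    # Exponential time: up to 2^n candidate subsets in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
ans = brute_force(nums, 9)
print(ans, "verified:", verify(nums, ans, 9))   # e.g. (4, 5) verified: True
# With n items there are 2**n subsets: at n=60 that's ~1.15e18 candidates.
print(f"n=60 -> {2**60:.2e} subsets to try")
```

If the problem really is in this class, exponential computer speedups translate into only linear gains in the size of problem you can handle.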
Thoughts?
While every single one of your points is potentially (and probably) valid, computers don't need to be dramatically "smarter" than people to displace labor as we know it. They only have to be good enough to do their particular job. We are already basically at the point where robot drivers are better than human drivers, even if a Google Car can't also play the piano or file your taxes. Even if the robot isn't better at a job, if it is even as good - or even close to as good - but only demands a "salary" of a few dollars of electricity each day, the robot gets the job.
Computers may never be able to solve scientific questions that we haven't solved yet without experimentation, but few jobs have ever depended on the verification or falsification of quark theory.
That's a good point, especially as it relates to the impact on jobs, if not the more mercurial singularity territory I tend to wander into. Thanks for jumping back in and grounding me a bit. Going a bit further on this, looking at current deep learning trends (the 'black box' pattern of a system just showing results and somehow figuring things out better than we can - see the 2012 Google Science Fair winner for a three-year-old example of the actual tech), even if a position is replaced by a robot or computer that does exactly as well as a human, I wouldn't foresee it staying that way for very long. I don't want to say creative, but the kinds of jobs that we can't currently write an algorithm for (I'll think of more actual examples later), like a surgeon, would be most affected by this.
Doubling down on the idea of a doctor, I could completely envision a scenario like this (and most of the numbers are made up - I don't have time right now to really look them all up):
From a Hospital (v1.0):
Finally, there is a robot in a hospital near you that can match a trauma surgeon in success rates. Just as our decorated current doctor is good enough to achieve a 70% full recovery rate, so can our new bot. Be part of the future.
From the military (v1.0):
Corpsmen are in short supply, and we can now get an Automated Trauma Surgeon delivered to the actual site of an IED explosion. The ATS (or bATSman, as the troops are beginning to call it) will reduce the permanent damage to our troops. Right now we are able to match a major city's full 70% success rate - but here, in the butthole of whatever oil-laden area we are currently liberating.
Within a few weeks or months, the military bot will have had a lot of experience handling trauma. In the hospital, I imagine the majority of people would still want a person. With an upgrade process based only on the metrics the bot collects on hand, there is the possibility of the military deep learning bot progressing to v1.5 with a success rate of, say, 75% (a rough sketch of what that metrics check might look like follows the announcement below). Then that software is uploaded from the military to the hospital bots. The new announcement would read:
Finally, there is a robot in a hospital near you that can exceed the success rate of our highly decorated trauma surgeon. Do you want a 3/4 chance of full recovery, or would you rather get the human touch and limit yourself to 7/10?
Yeah, it's not as polished as it would be, but I think it would happen similarly to that.
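And to gesture at the "upgrade only on field metrics" step: here's a hedged sketch (all function names and numbers are mine, as invented as the announcements above) of how a fleet might decide whether an observed 75% field success rate really beats the 70% baseline or is just small-sample noise, using a plain z-test on the proportion:

```python
# Sketch: only push v1.5 to the hospital bots once the field data shows the
# observed success rate is significantly above the 70% baseline, not noise.
import math

def should_upgrade(successes, cases, baseline=0.70, z_threshold=1.96):
    """One-sided check: is the observed rate significantly above baseline?"""
    observed = successes / cases
    # Standard error of a proportion under the baseline rate.
    se = math.sqrt(baseline * (1 - baseline) / cases)
    z = (observed - baseline) / se
    return observed, z, z > z_threshold   # 1.96 is a conventional cutoff

for cases in (40, 200, 1000):
    rate, z, ship = should_upgrade(int(cases * 0.75), cases)
    print(f"{cases:>5} cases at {rate:.0%}: z={z:.2f} -> "
          f"{'ship v1.5' if ship else 'keep collecting data'}")
```

With 40 field cases a 75% rate is indistinguishable from the 70% baseline; by around 1,000 cases the improvement is clearly real, which is roughly why the military bot, seeing trauma constantly, would rack up upgrade-worthy evidence faster than the hospital one.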
Yes, going from purely digital to physical manipulation may be hard (obligatory: http://what-if.xkcd.com/5/). In theory, production machinery and computer-controlled physical infrastructure should be air gapped (https://en.wikipedia.org/wiki/Air_gap_(networking)) for boring old computer security reasons, but this is very often not the case. I suspect that http://www.rethinkrobotics.com/baxter/ will not be air gapped, so that remote updates are possible - but I really have no idea about them specifically.
Hope to read/post more when I get home.
I loved that What If. Made me giggle. As far as air gapping goes, I view it as something that an AI ethics board would want to bring up or put in place, but in doing so one would lose a lot of the advantage of creating one (or more) in the first place. The science fair project I mentioned above couldn't have been air gapped and succeeded.