Ha! I'm loving the fact that this thread started getting active again (no offense tomsang...but most of those links you posted were not very conversation inducing, although they were interesting :P).
So I'll go ahead and thank rebs for the outstanding link that picked this back up. That was a pretty well written article. I had to be careful not to go down a rabbit hole on that site. :)
So I said I will follow up, and here I am. First, I will provide some responses to a couple of the things brought up. Then I will provide some background on my foray into this field.
The first thing was brought up by several people, and it's a somewhat valid point: the physical limitations. But let's think about that for a moment. I'm not even talking about ASI at this point (that comes later), but rather AGI. For those of you who didn't read the articles, or aren't familiar with the term, that's Artificial General Intelligence: an artificial intelligence that matches a human in computation and understanding. So let's separate the facts from the speculation.
- Narrow AI, or AI that does something specific (e.g. Watson, Google, Google Now, Siri), currently exists
- Narrow AI gets better every day (although technically it gets better with every use).
- Our current and recent breakthroughs and benchmarks in computing power and capability are in large part due to parallelism, not to fitting more transistors on a chip.
- Our ability to manipulate matter at the molecular level is primitive at best.
If you've used a phone made in the past couple of years, you should know that the cool features behind it are made possible by narrow AI. We use it all the time, and when it works, it's practically invisible. Have you ever scanned a document into Word and made it editable? Uploaded it to Google Docs? Optical Character Recognition (OCR) is what makes that possible, and it only reached its current level of accuracy because it was coded to learn how to read written characters...it wasn't coded to read written characters. Think about that for a moment. We didn't have the time, the skill, the patience, or the ability to handle all of the different kinds of handwriting that exist. So we didn't bother. As the price of processing power continued to drop, we stopped being limited by the time it would take for the system to learn. So those first two points already hold, and I know I use narrow AI multiple times per day (not so much OCR, because seriously...who deals with the handwritten word anymore?).
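To make the "coded to learn, not coded to read" distinction concrete, here's a toy sketch. The bitmaps, the nearest-neighbour rule, and all the names here are mine, purely for illustration; real OCR uses far richer features and vastly more training data, but the principle is the same: nobody writes a rule for every possible glyph, the program generalizes from labelled examples.

```python
# Toy "learned" character reader: store labelled examples, then
# classify new input by similarity instead of hand-written rules.
# 5x3 bitmaps (1 = ink, 0 = blank) for three characters.
TRAINING = {
    "T": [1,1,1,
          0,1,0,
          0,1,0,
          0,1,0,
          0,1,0],
    "L": [1,0,0,
          1,0,0,
          1,0,0,
          1,0,0,
          1,1,1],
    "O": [1,1,1,
          1,0,1,
          1,0,1,
          1,0,1,
          1,1,1],
}

def classify(bitmap):
    """Nearest neighbour: pick the stored example with the fewest
    differing pixels. The 'learning' is just memorising examples."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(TRAINING, key=lambda ch: distance(TRAINING[ch], bitmap))

# A sloppy, "handwritten" T: one stray pixel, one missing pixel.
messy_t = [1,1,1,
           0,1,0,
           0,1,1,   # stray mark
           0,1,0,
           0,0,0]   # pen lifted early
print(classify(messy_t))  # → T
```

Nobody told the program what a sloppy T looks like; it just picked the closest thing it had seen. Swap the toy distance metric for a trained statistical model and you have the rough shape of modern OCR.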
So some people mentioned that exponential growth may not continue, and that we don't see the end coming because humans are very good at extrapolating patterns, even when there aren't any. The Ruthless Extrapolation article made some really good points, and had some pretty cool examples of extrapolations that were proven wrong. I don't know that it necessarily applies in this case, but rather than argue with it, let's assume we are wrong, and Moore's Law does not continue. Remember that Moore's Law (or the extrapolative version of it, Ray Kurzweil's Law of Accelerating Returns) was initially based on the physical: the pure number of transistors that could be fit in the same amount of space. However, as someone pointed out, the fact that we are still meeting the estimated processing power has been in part due to parallelism. Some use this to argue that the pace can't continue, since the underlying hardware assumptions are expected to slow (at least with our current method of manufacturing). That could be a valid point. Even Moore himself says:
"It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."
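To put rough numbers on that quote: the sketch below projects a fixed two-year doubling forward from an approximate 1971 starting point (the Intel 4004 era, ~2,300 transistors). The doubling period and base figures are textbook approximations, not exact history, and the real cadence has varied.

```python
# Project transistor counts forward at a fixed two-year doubling.
# Base figures are rough: Intel 4004 era, ~2,300 transistors, 1971.
def projected_transistors(year, base_year=1971, base_count=2_300,
                          doubling_years=2):
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2031, 2051):
    print(year, f"{projected_transistors(year):.3g}")
```

The 2011 projection lands around 2.4 billion, which is in the same ballpark as chips that actually shipped then. But run it forty more years and you're at quadrillions of transistors per chip, which gives a feel for why Moore says the physical assumptions eventually break.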
However, I would posit that in this particular case, it's not actually relevant to the technology required. Rackspace, Google Cloud Services, and Amazon Web Services can all provide incredibly cheap and incredibly fast processing power. We already have enough linked and parallel processing power to run an AGI; we just haven't coded it yet. So the physical limitations that people worry about, and the arguments raised around them, are valid for now. However, that does not mean that damage can't be done to our society and our way of life, even without physical bodies.
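That parallelism point is easy to demonstrate in miniature. The sketch below (the task, shard sizes, and names are mine, purely for illustration) splits one job across four worker processes. It's the same pattern cloud providers apply across thousands of machines: more throughput by adding workers, not by making any single core faster.

```python
# One job, sharded across parallel workers: the same pattern cloud
# providers scale across whole data centres.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive,
    so there's real CPU work to spread around)."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    # Cut the range [1, 100000) into four shards that run concurrently.
    shards = [(1, 25_000), (25_000, 50_000),
              (50_000, 75_000), (75_000, 100_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(count_primes, shards))
    print(total)  # primes below 100,000
```

Nothing about the answer depends on how many workers you use; four shards on one machine, or four thousand shards rented from AWS, produce the same result, just faster. That's why "transistors per chip" stopped being the binding constraint.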
I have some actual work to do now, so I will come back later and address some other points that were brought up. Namely (and so I don't forget):
- Feedback and inputs required
- Anecdotal points regarding different fields and their progress
- Black box systems and examples
- Goal-oriented programming
- Friendly AI