Computer programming is an awesome system; just look at the world around us and see what it has built. Of all the awesome ways we have applied computer programming, the most important is the possibility of building real Strong artificial intelligence.
If you noticed, I said the possibility, because despite all the progress we have made in image recognition, speech recognition, language translation, and even the scary-looking Deep Reinforcement Learning based game playing, we are still far away from anything close to what we humans possess.
The biggest problem with our current AI, apart from all the others we talk about in the literature like brittleness, is the fact that we have to do so much work to make it work at all.
The big win of modern AI techniques like deep learning is that they enable us to solve problems that would have been very hard to program explicitly. This leap, representing the solution to a problem in an automatically generated model without explicit programming, is one of the biggest technological leaps of our times. It is not the first time we have come across these methods; what makes this moment special is that we finally have the matching computing power to realize these feats.
If we take a trip back to the 60s, when software engineering was gaining momentum, the idea of programmability already had us fantasizing about machines that could think just like humans. Our reasoning went like this: if we could program machines to perform intellectual tasks like calculation, and they did this better than us, then we could program machines to think better than humans.
During the following decades, while achieving many other technological feats powered by software, we kept trying to build artificially intelligent systems, and time after time, after great effort, we ended up with machines that just would not start.
It is like building a huge, fantastic machine that doesn't come on when you push the start button. This went on till the 80s, when we had expert systems, and the same story of the machine not starting applied there too.
But with deep-learning-powered AI, we decided to narrow our goals and achieve simpler things like image recognition and language translation, and this has worked well so far. As usual, though, we are beginning to overgeneralize, thinking that if we only scaled such systems out, we would achieve Strong AI and eventually superintelligence.
This is no different from the dream fostered by the expert-systems people in the 80s. They thought that all we needed was more knowledge and eventually the system would have human-grade intelligence, and as we know, that didn't happen.
What I want to point out here is not the specific limitations of deep learning, because many people have already dealt with that issue in other writings. What I want to bring to mind is something more primitive: the very idea that we can program a Strong AI.
When you hear Elon Musk talk about AI, many people say he is mistaken in his fears. But what Musk is afraid of is not our current systems; it is something that doesn't yet exist but is a real possibility, since we cannot predict who will make the necessary conceptual leap or when that leap will be made. That is his real fear. He emphasizes that we might just be the bootloader for such an intelligence.
This requires some software that becomes intelligent and launches the engineering of its own intelligence on the platform we have given it, eventually outgrowing the initial system to become some big bad system.
What Musk gets wrong is assuming that this system would possess an intention just like we humans have, and that if that intention were not aligned with ours, we might be no more than house cats to such an intelligent system.
What I believe is that intention is not coupled with intelligence, which I would fairly define as the capacity to optimally achieve any goal. Movies have made us weld the idea of intention to intelligence, but in reality we could have a machine of near-infinite capabilities that possesses no self, personality, intention, or goal.
Such a raw intelligence would be like an omnipotent search engine that spews out answers to any problem we have, from designing a better aircraft all on its own without supervision, to drawing up a plan for interstellar travel superior to anything any human or group of humans could come up with.
Back to the main issue of this post: can we program such a system? If we think we can because we are able to build a Deep-RL system that defeats a human at a human-contrived game of whatever complexity, then I say the true problem of our age is our ego, not necessarily our lack of superior intelligence.
The modern programming languages we use are themselves crutches, without which we wouldn't be able to express the kind of high-level ideas that have led us to build all the tech infrastructure we possess. I am not looking down on human intelligence and its ability to create muscle-amplification tools, and now mind-amplification tools like computers and programming technology. What I am bringing to our attention is the fact that we might not possess the kind of mind that can move directly from programming code to AI; we might need to build other crutches beyond programming languages.
This might not be as far-fetched as we might initially think, and the root ideas behind computing might help us jump over this cliff to the green, grassy plains ahead.
The deep idea behind computing can be found in Alan Turing's original paper describing universal computation. The Turing machine is an idealization of a universal computer, and without going into too many details, we can go directly to the ideas it exposes.
The fundamental requirement for computing is a manipulatable memory: something that holds state and transforms that state using rules. That's all it is.
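To make that concrete, here is a minimal sketch in Python. The rule table and helper below are my own illustrative inventions, not anything from Turing's paper: the tape is the memory that holds state, and a small table of rules transforms that state one cell at a time. This particular machine increments a binary number.

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write                # rewrite the current cell
        head += 1 if move == "R" else -1   # move the head right or left
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rules for binary increment: scan to the right end, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("1011", rules))   # 1011 + 1 = 1100
```

Everything the machine does is captured by that little table of rules acting on memory; the rest is bookkeeping.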
If we look at practical computing, we can see it as mutating structures, because even though at the level of the machine an executing program goes one instruction at a time, when we are thinking of higher-level tasks we are actually mutating large structures in one fell swoop, for example when operating on some kind of advanced data structure.
Most of computer programming is mutating structures: things composed in some kind of hierarchical form, where some parts stay fixed and others are changed. This is as abstract a definition of software engineering as I can give, because by understanding things abstractly we can drill down to the essence and make conceptual leaps.
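As a toy illustration of that definition (the nested structure and the mutate helper are hypothetical, made up for this post), here is a hierarchical structure where a single leaf is changed while everything else stays fixed:

```python
config = {
    "model": {"layers": 4, "activation": "relu"},
    "training": {"lr": 0.01, "epochs": 10},
}

def mutate(structure, path, value):
    """Return a copy of a nested dict with one leaf replaced."""
    node = dict(structure)        # shallow copy: untouched parts are shared
    key, *rest = path
    node[key] = mutate(structure[key], rest, value) if rest else value
    return node

updated = mutate(config, ["training", "lr"], 0.001)
print(config["training"]["lr"], updated["training"]["lr"])   # 0.01 0.001
```

One operation, one fell swoop, and most of the hierarchy is untouched; that is the shape of nearly everything we do in software.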
The next crutch we have to build beyond programming languages is systems that can define executable solutions by automatically searching for the structures that represent those solutions. Once we can build this new level using our human-engineered programming languages, we will have an infrastructure we can direct towards realizing the kind of Strong AI we are dreaming of, not by explicitly programming it, but by searching for it.
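To give a feel for what searching for a solution rather than programming it could look like, here is a deliberately naive sketch: random search over tiny expression trees until one reproduces a target behavior on test inputs. Everything here (the operator set, the tree encoding, the names) is an assumption I made up for illustration; a real system would need far smarter search, but the shape of the idea is the same.

```python
import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_expr(depth):
    """Grow a random expression tree over x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1, 2])
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

target = lambda x: x * x + 1   # the behavior we want, never written as a program
tests = range(-5, 6)

found = None
for _ in range(100_000):
    candidate = random_expr(depth=3)
    if all(evaluate(candidate, x) == target(x) for x in tests):
        found = candidate
        break

print(found)   # e.g. ('+', ('*', 'x', 'x'), 1) -- a program found, not written
```

The interesting part is that the final program is discovered, not written. Scale that search up by many orders of magnitude, with far better guidance than coin flips, and you have the kind of infrastructure this post is arguing for.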