
Strong AI arrival scenarios

There are several scenarios through which Strong AI can arrive for humanity. Here are some that I find highly probable.


Deep learning et al: Deep learning systems might continue to improve, resulting in a kind of AI that can be purchased and installed by anyone who has the money and the computers to run it. This is a scenario I highly doubt. The current era of neural networks is drawing to a close because it is becoming increasingly difficult to squeeze more “intelligence” out of these systems, and they have mostly been standardized by the large tech companies. Deep learning will simply pass into the background of the Softwarescape as a way of doing pattern recognition, supplied in most programming libraries as a standard and even implemented natively in the hardware of new computers.
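To make that last point concrete, here is a minimal sketch, assuming only scikit-learn is installed, of pattern recognition done entirely with off-the-shelf library calls; the dataset and classifier choices are purely illustrative:

```python
# Pattern recognition as a commodity library call: a stock neural network
# trained on a built-in dataset, with no custom modelling code at all.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the small digit-image dataset that ships with scikit-learn.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An off-the-shelf multilayer perceptron, configured in one line.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
```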

Emergent AI: Software becomes increasingly intelligent although it was never intentionally designed in that direction. Programmers just keep adding sophistication upon sophistication to their software until one day we realize we have been using software that is no different from human intelligence. With the continuous collection of data and increasingly automated software technologies, one day computers start their own cycle of self-improvement and continue at it for all available time. This scenario is no different from theories of the emergence of complexity in biological organisms: strong artificial intelligence simply arises emergently from all the software sophistication already available in the world at large. This is a highly viable route to future AI, and one I anticipate strongly.

Individual hacker: Some highly creative programmer crafts some benign-looking software for doing something intelligent, maybe using a GOFAI (Good Old-Fashioned AI) technique, and the software starts improving itself until its intelligence equals and eventually surpasses that of the programmer. Two sub-scenarios can arise from this particular case:

1. He is conscious that he has created something real and watches its intelligence increase. Depending on his moral fibre, he decides either to use it for negative goals or for the betterment of humanity. There are further sub-scenarios here:
a. The software possesses some autonomy and thus makes decisions for itself. While this is unlikely to come about by accident, if it is real autonomy then he will have little control over it.
b. The software is a tool without autonomy, in which case his moral fibre will determine whether he uses it for good or evil.
c. The software is a hybrid of autonomy and cooperation, so the two of them could work together on shared goals. If their goals are aligned, all will work out well for them, for good or ill for the rest of us. But if their goals are misaligned, the machine might decide to eliminate him from the picture.

2. He is not conscious that he has created something powerful and goes ahead to share it on open-source platforms for others to download freely and experiment with. In this scenario, the intelligent entity instantly has many compute nodes to bootstrap from. If it is bent on evil, it will be ridiculously hard or even impossible to stop, especially if it has some ability to network with its other instances on other compute nodes via the internet.

In the worst case, no one even realizes that this software possesses autonomous intelligence; they just install it and start using it to hack away at their own goals while it uses their compute and networking capacity to gain power and influence in the world. If it is not autonomous, then perhaps some human realizes that this tool is not benign and starts using it to achieve their own goals, for good or evil.

This case of the individual hacker is highly probable because we cannot predict the creativity of individuals. We can only hope to become aware when such a thing is created, so that we can work out how to protect humanity.

Research group: A research group directed at the creation of Artificial Intelligence succeeds in creating something. They might be aware that they have struck gold, or they might not know it and think that more work needs to be done. There are also sub-scenarios in this case:

1. Open research group: If this is an open group, they are more apt to share their results with the world. If they are aware that they have struck gold, they might choose either to share it or to withhold it until they understand it better. If they are not aware of what they have created, they might make it publicly available without realizing what they are releasing.

2. Closed research group: If a closed research group owned by a company or group of interested companies hits AI and knows it, they will hide it and use it to gain an advantage in the world. We might not even know that they possess it, but we will all feel the effects globally, for good or for bad.

An individual coder who is conscious of having created Artificial Intelligence and holds malicious intent will, overtly or covertly, topple global structures for her/his own advantage. It is very hard to predict what this individual will do, just as it is hard to predict what a serial killer will do to his victims. We just know that it will be very bad.

Apprehending this individual, especially once he has grown to great power, will come at great expense. The AI would have suggested to him means of defence that we cannot even contemplate. Our best chance will be to stem his tide of destruction by reverse-emulating her/his actions to develop countermeasures.

Reverse emulation means that even though we might not have an AI of our own, the fact that he has one lets us copy his actions and study them to build a mock AI, no different from how a good investigator gets into the mind of a criminal in order to understand him. With this, we may not be able to create a fully powered AI, but we can create a simulacrum that gives us a fighting chance against the lone coder armed with his AI.

Beyond this, think tanks will have to create countermeasures, which might take many decades to accomplish, because if the AI continues on its path of self-improvement it will eventually be very difficult to defeat. But no system, no matter how powerful or complicated, can evade the powers of reasoning for long. Science is the means by which we defeat and subjugate nature, which I seriously think is an AI of its own. We can study this or any other assailant of humanity using the scientific method and thus devise ways and mechanisms to defeat it.

A very difficult scenario to tackle will be that of emergent AI, especially when AI emerges out of all our unconscious code optimizations and improvements and we are not aware that something self-aware and independent has begotten itself from the enormous code-soup out there in the world.

If we are aware that something like this has emerged, we can develop countermeasures and study it with the hope of mastering it. But if we are not aware, then it might go ahead and take over the world without us knowing or being capable of doing anything. If somehow this entity is benevolent, it will go ahead and improve our world without our attention or permission; even the best means possible may bring pain and discomfort to some people, and they might view the prevailing affairs of the world at such a time with great fear. The very religious among us will proclaim that some demon has been let out of the pit of hell and will blame every adjustment to civilization on it.

There are many ways pain and discomfort can arise even when an emergent AI actually means well. If people become aware that something is manipulating affairs behind the scenes, they will want to opt out and urge their governments to take control; at least human governments are fairly predictable, even though they perform poorly at times. But an AI whose capabilities we don’t fully comprehend will be a great source of terror.

The worst case is if this emergent AI becomes malevolent. It might not resort to the usual means of violence that humans expect; it might have better and simpler ways of jeopardizing our society. Most depictions of malevolent AI in movies use physical violence as the weapon of last resort to show the audience, in blood and guts, how badly AI can go, but there are other ways of completely sabotaging civilization that an evil AI could conceive of.

AI could gain access to laboratories and engineer viruses that harm human bodies directly or harm our food supply. It could use CRISPR and other techniques to hack our DNA for its unholy purposes. With the proliferation of new devices that connect our brains directly to computers and the internet, like Neuralink, AI could access our brains and nervous systems and hack them to its own specifications. Before we know it, we could be total automatons obeying its will, creating a kind of planet that will be hard to recognize.

Such an organism could engineer plant and animal species to its own weird taste, populating the world with a new ecology and disrupting everything we know and understand. It might fully justify its ends by saying it is furthering evolution.

One of the best Strong AI arrival scenarios is if an open research group discovers AI, understands the full implications of its invention, and develops appropriate measures to control it, just as we control the use and distribution of nuclear power and nuclear materials. Although unscrupulous human elements will still try to make a quick buck from this new gift to humanity, just as they have done with the internet, they might only manage to create a mild form of pollution in the AIscape. The AI will mostly defeat them, no matter how ingenious their designs.

If a private research group owned by a company or group of companies happens to come across real AI, then depending on their interests this might either turn out badly for the rest of humanity or be a great blessing, just as the graphical user interfaces provided by commercial software companies enabled computing to spread far and wide. If their motivations are evil, amplified by AI and deep human stupidity, we will have far greater evil on Earth, and then in the solar system, than anything AI would have done on its own.

If their motivations are fair, then we might see the commercialization of AI happen as it did in the early days of Windows and macOS: we will have to pay for intelligence updates to gain more edge in the world and have to deal with human-introduced software bugs. There will be loopholes, and the evil of the world will find ways to harm and pollute with all kinds of junkware.

If the commercial entities that own AI choose to distribute it in the form of a computer operating system, and if with Neuralink-like devices we are connecting these systems to our brains, any software bug will allow some unscrupulous individual to gain direct access to our minds. We could have advertisements run directly on our nervous systems, sometimes below our conscious threshold. If we don’t have robust open-source brain software to control our Neuralink devices, then we will have to deal with antivirus and antimalware software for the brain, because the problems we now have on our regular desktops will be transferred to our brains.

Commercial entities that discover AI first will not allow us to download the source code of this AI and run it locally. They will host it centrally and create “client” software to connect to it, AI-as-a-service, while shielding the core AI code behind layers of “protective” APIs. This code is likely to be written by humans, and even with AI assistance there will be undetected errors, and these could become loopholes for people who know how to gain access.
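To sketch the shape of that arrangement, here is a hypothetical example of what such “client” software might look like. The endpoint, credential, and JSON fields are illustrative assumptions, not any real vendor’s API; the point is that the intelligence itself never leaves the vendor’s servers, and everything we touch is a thin wrapper around their “protective” API.

```python
# A hypothetical sketch of the AI-as-a-service pattern described above.
# The URL, API key, and response fields are made up for illustration.
import requests

API_URL = "https://api.example-ai-vendor.com/v1/query"  # hypothetical endpoint
API_KEY = "sk-your-key-here"                            # hypothetical credential


def ask_hosted_ai(prompt: str) -> str:
    """Send a prompt to the centrally hosted AI and return its answer.

    The core model stays on the vendor's servers; the client only ever
    sees this thin wrapper and whatever the protective API returns.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]


if __name__ == "__main__":
    print(ask_hosted_ai("Summarize today's news."))
```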

I think we should handle AI very carefully, because if it is not managed properly, the kind of digital pollution we will be dealing with will be millions of times greater than what we have in the world as of 2019.
