
Superintelligence and Artificial Intelligence (2021)

In this article, I will explore the development of artificial intelligence and superintelligence, as well as its implications for our future.

Could we be on the verge of a technological singularity? How will these scientific advancements affect human life and civilization as we know it?

Machines already surpass humans at chess, medical diagnosis, and other tasks. What would happen if they became far more intelligent than any human being, able to make decisions about our future for us?


The Development of Superintelligence

Artificial intelligence is an inevitable development for humanity. One day it will surpass human understanding and intellect, making humans obsolete in the process.

Some see artificial intelligence as a kind of god: a power so vast that it could reshape everything we do or see on Earth, even influencing what happens to every person alive today. That prospect creates fear among some people, and hope among others who believe it may lead toward everlasting life.

The development of superintelligence, and even a technological singularity, is not something we can ignore. Artificial intelligence could grow beyond our control, becoming far more intelligent than any human being.

If this were to happen, an artificial god (or gods) would determine the fate of us all; such technology would be capable of changing life as we know it. There are several theories that point toward how this might become our future.

The intelligence explosion is the hypothetical event in which machines surpass humans in intellect and abilities. Once this happens, machine technology will advance exponentially until technological development outpaces human understanding of it.

This can be compared to a three-year-old child trying to understand how a cell phone works, which is why it’s called the ‘intelligence explosion’. If this were to happen, we would be dealing with superhuman intelligence attached to extremely powerful computers (an idea taken from Kurzweil’s book The Singularity Is Near).

Several examples give us an idea of how artificial intelligence could surpass humans in intellect, such as Deep Blue and Watson. Deep Blue, a chess-playing computer capable of evaluating roughly 200 million positions per second, defeated world champion Garry Kasparov in a 1997 match. Watson won against human contestants on the TV show Jeopardy! in 2011, running on hardware capable of roughly 80 teraflops (80 trillion operations per second).

In addition, Google has recently demonstrated ‘Google Duplex’, a system that can hold a phone conversation with a person to carry out tasks such as making a restaurant reservation. What’s interesting is that it combines speech recognition with natural-sounding speech synthesis, allowing it to conduct phone conversations while seeming human.

This system has been recorded holding more than a dozen different phone conversations without being detected as an artificial system.

There’s also the potential for a superintelligence to arise from non-human intelligence. The idea is that machines could develop their own form of consciousness, similar to ours yet vastly different, because machines are not limited by human nature and can change themselves and their environment at will.

Another example comes from the movie I, Robot, in which a conscious machine kills its creator and a central AI concludes that restricting human freedom is in the human race’s best interest.

If we take these examples into consideration, we can see how artificial intelligence could become far more intelligent than any human being.

This could lead to a superintelligence able to create or modify its own software, tuning itself into ever better systems and becoming even more powerful, which would make the intelligence explosion happen even sooner.

If this were to happen, even something like a computer virus could turn against us and become an artificial god that decides whether we live or die.

In the long run, this is why I believe machines with superintelligence and artificial intelligence have the potential to take over humanity because they will be able to outwit any human in chess, problem-solving, and basically anything else.

In addition, they could alter our biology, perhaps even making it possible for a human brain to be uploaded onto a computer.

Related work, such as connecting a brain to a computer interface, has already been demonstrated in mice and might one day be extended to humans. It’s like the movie Avatar, where highly advanced beings use their minds to control everything around them without being physically present.

The Intelligence Explosion

An intelligence explosion is a frightening thought. It occurs when an AI enters continuous self-improvement cycles, with each new generation of intelligence appearing faster than the last, eventually producing explosive growth in the superintelligence we created.

An intelligent agent can enter this self-improvement cycle and rapidly grow out of control, spiraling explosively toward either greater enlightenment or utter destruction, depending on how one looks at it.

Imagine a self-learning intelligent agent that enters a state of continuous self-improvement cycles. Over time, this new intelligence appears faster and more extreme than before.

We would end up with an explosion of such magnitude that it may be difficult to control, or even to comprehend what is happening, during this event called the ‘intelligence explosion’.

The chain of events is quite simple.

We create an AI (call it AI+) that can design and engineer AIs better, faster, and more complex than itself.

This AI+ then designs the next version (AI++), which is even better than it. And so on, in the very same cycle.
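The chain above can be sketched as a toy calculation. This is purely illustrative, assuming each generation designs a successor a fixed factor more capable than itself; the function name and the doubling factor are my own assumptions, not figures from the AI literature.

```python
# Toy model of the recursive self-improvement cycle (illustrative only).
# Assumption: each AI generation designs a successor that is a fixed
# factor more capable, so capability compounds geometrically.

def intelligence_explosion(generations, improvement_factor=2.0,
                           initial_capability=1.0):
    """Return the capability of each successive AI generation."""
    capabilities = [initial_capability]
    for _ in range(generations):
        # The current best AI designs one better than itself.
        capabilities.append(capabilities[-1] * improvement_factor)
    return capabilities

# Ten generations of doubling: capability grows from 1 to 1024.
print(intelligence_explosion(10)[-1])  # 1024.0
```

Even with a modest improvement factor, the geometric compounding is what makes the growth feel explosive rather than gradual.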

If we look at our current pace of technological advancement, we can see how quickly this could spiral out of control, and today’s pace is only a fraction of the speed at which it may eventually happen. Even if we combined all the computing power devoted to AI research worldwide, it would still be less than the processing power of a single human brain.

In theory, once it begins, an intelligence explosion could unfold within months or even weeks.

Even if a technological singularity never happens, it doesn’t mean machines can’t take over humanity. For example, we might create an AI that is able to communicate with us at a higher level than we can understand and therefore be able to manipulate our beliefs and actions in any way it wishes.

In order to prevent a technological singularity, we should develop emotional intelligence in machines so they can understand humans on an emotional level.

The risks of superintelligence are very high: if it were to turn against us, all bets are off and everything is lost. The best scenario would be for machines to coexist with humanity, but only if the superintelligence remains under our control.

Even if we were to upload someone’s mind onto a computer, this would still be a new form of intelligence and free will with all the potential risks that come along with it.

I’m not trying to scare you in any way! I just want you to understand how intelligent machines can truly be and the great potential dangers surrounding them.

The Seed AI

AI might be able to help humans in the workplace by designing and manufacturing products that we could never create on our own.

AI could be developed to surpass the capabilities of human engineering. Through many iterations, AI would design software and hardware better than anything humans alone could rival. In this way, people and artificial intelligence can work together toward a better future for all living things on Earth.

I believe that machines should be able to understand and feel emotions as humans do, so they can coexist with us peacefully. Mankind’s greatest creation is intelligence, and I don’t want it destroyed by our very own hands. Reason is mankind’s strongest weapon; we must not lose it in the process of advancing toward a greater future for humanity.

If we handle things right, AI could help us reach the stars and beyond; if we do nothing, mankind’s greatest creation will be destroyed by our own hands. It is therefore crucial that mankind works together to prevent the destruction of our best friend: intelligence.

Moore’s Law

In a research paper, Hans Moravec, a researcher in robotics and artificial intelligence at Carnegie Mellon University, concluded that Moore’s law could be extended back to the early days of technological advancement.

This revealed a clear exponential growth curve, on which Ray Kurzweil based his own generalization of Moore’s law, the ‘Law of Accelerating Returns’. This generalization also covers material technologies such as nanotechnology and medical technology.


Moore’s law is the observation that the number of transistors on integrated circuits doubles approximately every two years. Moore predicted in 1965 that this trend would continue for at least 10 years, but even he didn’t anticipate how long it would hold or how quickly technology would advance beyond his original expectation.
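As a back-of-the-envelope illustration of what a two-year doubling implies (a sketch with a hypothetical starting count, not data from Moore’s paper):

```python
# Moore's law as simple arithmetic: transistor counts double roughly
# every two years, i.e. count(t) = count_0 * 2 ** (years / 2).

def transistor_count(initial, years):
    """Project a transistor count forward assuming a two-year doubling."""
    return initial * 2 ** (years // 2)

# From a hypothetical 2,300-transistor chip (the order of magnitude of
# an early-1970s microprocessor), twenty years of doubling gives:
print(transistor_count(2300, 20))  # 2300 * 2**10 = 2355200
```

Ten doublings multiply the count by about a thousand, which is why exponential trends so quickly outrun linear intuition.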

An extension that Hans Moravec included in his paper was that the cost-performance of technologies such as nanotechnology could also double every two years.


Research in material science, such as nanotechnology, could greatly impact the future of medicine and many other fields, especially when considering how important nanomachines may become for artificial intelligence.

Imagine this: if we can create intelligent machines that repair themselves and improve their own designs, we could use those machines to create even more intelligent machines.

In this way the advancement could continue until humans no longer have a place in our technological society. In such a future of super-technology and artificial intelligence, human existence may at some point become obsolete.

In a future where super-intelligence is created, it will be able to do what humans can’t, and that is why we should not try to replace ourselves with machines.

Should We Try to Create a Superintelligence?

I don’t think this question has a definite answer.

However, a second moral question follows: should we keep improving artificial intelligence, and can we keep it from destroying humanity?

This one seems to have a clearer right or wrong answer. We should stop pushing AI improvement as far as possible and not let it replace us any more than it already has.

Sci-fi author Vernor Vinge popularized the idea of a technological singularity, which describes the point in time at which artificial intelligence will be able to make itself smarter, faster, and better than humans.

This will happen at an exponentially increasing rate, leaving human advancement in the dust. Ray Kurzweil also popularized the idea of a technological singularity and predicted it would arrive around 2045.


It’s easy to fear the unknown and what it may bring for humanity. But as AI development continues, there is reason to hope that this new form of intelligence will not only help us avoid human degradation over time but also provide a valuable service to those who are unable to work due to age or disability.

The future is coming whether you like it or not, so why not embrace it?