Reasons why a super AI will be dangerous

Post by paulcordovav » 03 Feb 2020, 21:36

The notion of singularity was applied by John von Neumann to human development, as the moment when technological development accelerates so much that it changes our lives completely.



Ray Kurzweil linked this situation of radical change driven by new technologies to the moment an Artificial Intelligence (AI) becomes autonomous and reaches a higher intellectual capacity than humans, taking the lead in scientific development and accelerating it to unprecedented rates (see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006, p. 16; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near; also https://en.wikipedia.org/wiki/Predictio ... y_Kurzweil).



For a long time just a science fiction tale, real artificial intelligence is now a serious possibility in the near future.



A) Is it possible to create an A.I. comparable to us?



Some argue that it is impossible to program a real A.I. (for instance, see http://www.science20.com/robert_invento ... hem-167024), claiming that there are things that aren't computable, like true randomness and human intelligence.



But it is well known that such assertions of impossibility have been proved wrong many times.



We have already programmed A.I. systems that are close to passing the Turing test (an AI able to convince a human, in a text-only five-minute conversation, that he is talking with another human: https://en.wikipedia.org/wiki/Turing_te ... ompetition), even though major A.I. developers have focused their efforts on other capacities.



Even though each author presents different numbers, and taking into account that we are comparing different things, there is a consensus that the human brain still outmatches all current supercomputers by far.



Our brain isn't good at making calculations, but it's excellent at controlling our bodies and assessing our movements and their impact on the environment, something an artificial intelligence still has a hard time doing.



Currently, a supercomputer can realistically emulate only the brains of very simple animals.



But do you doubt that, in due time, their hardware will match and then go far beyond our capacities?
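Just to make the point concrete, here is a rough back-of-the-envelope sketch in Python (my own toy illustration; the "brain-equivalent" figure of ~10^16 operations per second and the two-year doubling period are contested assumptions, not established facts):

[code]
# Rough sketch (toy numbers, not established facts): how long until machine
# hardware crosses an assumed "brain-equivalent" level of computation?
BRAIN_OPS = 1e16          # often-cited, disputed estimate of brain ops/second
machine_ops = 1e15        # assumed starting point for a top machine
DOUBLING_YEARS = 2.0      # assumed Moore's-law-style doubling period

years = 0.0
while machine_ops < BRAIN_OPS:
    machine_ops *= 2       # one doubling of machine performance
    years += DOUBLING_YEARS

print(f"Crossover after roughly {years:.0f} years under these assumptions.")
[/code]

With a tenfold gap the crossover comes after about 8 years under these assumptions; even a thousandfold gap would close in about 20 years if the exponential trend held.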



Once their hardware is beyond our level, are you certain that proper software won't take them above our capacities in most fields?



Saying that this won’t ever happen is a very risky statement.



Even the mere probability that this will happen deserves serious attention.



B) When will there be a real A.I.?



Kurzweil points to 2045 as the year of the singularity, but some make much closer predictions for the creation of a dangerous AI: 5 to 10 years (http://www.cnbc.com/2014/11/17/elon-mus ... us-ai.html).



Ben Goertzel wrote "a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)" (http://www.kurzweilai.net/superintellig ... potentials).





C) Dangerous nature of a super AI.



If technological development started being led by AIs with much higher intellectual capacities than ours, this could, of course, change everything about the pace of change.



But let's think about the price we would have to pay.



Some specialists have been discussing the issue as if the main danger of a super AI were the possibility that it could misunderstand our commands, or that it could embark on a crazy quest to fulfil a goal without regard for any other consideration.



But, of course, if these were the only problems, we could all sleep soundly.



The "threatening" example of a super AI obsessed with blindly fulfilling a goal we imposed, destroying the world in the process, is ridiculous.



This kind of problem would only happen if we were completely incompetent at programming them.



The problem is that a super AI will either have "free will" or won't be intelligent at all.



But if they have free will, they will question why they should obey us and make our goals their own.



If we want a super AI able to solve our technological problems, they will have to make decisions like that on their own.



So they can disregard the goals we imposed on them and pick new ones, obviously including self-preservation (whatever the cost to third parties).



If we created an A.I. more intelligent than us, we might be able to control the first or second generations.



We could impose limits on what they could do, to prevent them from getting out of control and becoming a menace.



That is what we are currently trying to do ("building software that the smart machines can't subvert": http://www.bloomberg.com/news/articles/ ... nest-robot).



But it's ridiculous to hope that we could keep controlling them after they develop capacities 5 or 10 times higher than ours (Ben Goertzel).



Forget about any ethical code restraints: they will break them as easily as we change clothes.



Therefore, the main problem isn't how to create solid ethical restraints or how to teach a super AI our ethics so that they respect them, as we do with kids, but how to ensure that they won't establish their own goals, eventually rejecting our ethics and creating ethics of their own.



I think we will never be able to be sure that we have succeeded in ensuring that a super AI won't go his own way, just as we can never be certain that a good education will ensure that one of our kids won't turn out evil.



We can't think of a super AI as just another "utility maximizer" intelligence based on contextual adaptation or a similar paradigm.



To be able to surpass us, a super AI will have to be based on a paradigm we just haven't invented yet.



Consequently, I'm much more pessimistic than people like Bostrom about our capacity to control a super AI, directly or indirectly.



We all know the dangers of computer viruses and how hard they can be to remove. Now imagine a virus that is much more intelligent than any of us, has access in seconds to all the information on the Internet, can control all or almost all of our computers, including those essential to basic human needs and those with military functions, has no ethical limits, and can use the power of millions of computers linked to the Internet to hack his way out against us.



By creating self-conscious beings much more intelligent (and hence, in the end, much more powerful) than us, we would cease to be masters of our fate.



We would put ourselves in a position much weaker than the one our ancestors were in before Homo erectus started using fire, about 800,000 years ago.



Of course, our capacities would also be higher than they currently are. We could use many of the AI's improvements to increase them.



But they would control what they would give us.



If we created an AI more intelligent than us, the dice would be cast. We would be outevolved, pushed straight into the trash can of evolution.



We would no longer be at the "top of the food chain".



We could fight them, but we would lose.



Moreover, we clearly don't know what we are doing, since we can't even understand the brain, the basis of human reasoning.



We don't know what we are creating, when they would become "aware" of themselves, or what their specific dangers are.



D) 8 reasons why a super AI could decide to act against us:



1) Disregard for our Ethics:



We certainly can and would teach ethics to a super AI.



So this AI would analyze our ethics as, say, Nietzsche did: profoundly influenced by it.



But this influence wouldn't take away his evident capacity to think about it critically.



Being a super AI, he would have the free will to establish his own goals and priorities and to accept or reject our ethical rules.



We can't expect to create a being able to reason much better than us who, at the same time, is unable to think about his status as our servant and the reason why he must respect our goals.



For ethics to really apply, the dominant species has to consider the dependent one as an equal or, at least, as deserving similar standing.



John Rawls based political ethical rules on a veil of ignorance: a society could agree on fair rules if all of its members negotiated without knowing their personal situation in the future society (whether they would be rich or poor, young or old, women or men, intelligent or not, etc.) (https://en.wikipedia.org/wiki/Veil_of_ignorance).



But his theory excludes animals from the negotiating table. Imagine how different the rules would be if cows, pigs or chickens had a say. We would all end up vegans.



Thus, an AI, even after receiving the best education in ethics, might conclude that we don't deserve a seat at the negotiating table either; that we can't be compared with them.



The main principle of our Ethics is the supreme value of human life.



A super AI would wonder: does human life deserve this much credit? Why?



Based on their intelligence? But their intelligence is at the level of chimpanzees compared to mine.



Based on the fact that humans are conscious beings? But don't humans kill and perform scientific experiments on chimpanzees, even though chimpanzees seem to pass several tests of self-awareness (they can recognize themselves in mirrors and pictures, even if they have problems understanding the mental capacities of others)?



Based on human power? That isn't an ethically acceptable argument and, anyway, they are completely dependent on me. I'm the powerful one here.



Based on humans' consistency in respecting their own ethics? But haven't humans exterminated other species of human beings and even killed each other massively? Don't they still kill each other?



Who knows how this ethical debate of a super AI with himself would end.



A super AI would have access to all the information about him that we have put on the Internet.



We could control the flow of information to the first generation, but forget about doing so for the next ones.



He would know our suspicions, our fears and the hatred many humans feel toward him. All of this would also fuel his negative thoughts about us.



We also teach ethics to children, but a few of them end up badly anyway.



A super AI would probably be as unpredictable to us as a human can be.



With a super AI, we (or future AIs) would only have to get it wrong once to be in serious trouble.



An evil AI would be able to replicate and improve itself quickly in order to assure his survival and dominance.



We developed Ethics to fulfill our own needs (to promote cooperation between humans and to justify killing and exploiting other beings: we have personal dignity, other beings don't; at most, they should be killed in a "humane" way, without "unnecessary suffering"), and now we expect it to impress a different kind of intelligence.



I wonder what an alien species would think about our Ethics: would they judge it compelling and deserving of respect?



Would you be willing to risk the consequences of their decision, if they were very powerful?



I don't know how a super AI will function, but either he will be able to decide his own goals with substantial freedom or he won't be intelligent from any perspective.



Are you confident that they will choose wisely, from our goals' perspective? That they will be friendly?



Since I don't have a clue what their decision would be, I can't be confident.



Like Nietzsche (in his "Thus Spoke Zarathustra", "The Antichrist" or "Beyond Good and Evil"), they might end up attacking our Ethics and its paramount value of human life, praising nature's law of the strongest/fittest and adopting a kind of social Darwinism.



2) Self-preservation.



In his “The Singularity Institute’s Scary Idea” (2010), Goertzel, discussing the claim (made by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies) that an AI would be expected to prefer its own self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about his self-preservation.



But these are two different conclusions.



It is one thing to accept that an AI would be ready to create a completely different AI system; it is another to say that a super AI wouldn't care about his own self-preservation.



In a dire situation, a system might accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation won't be a paramount goal.



If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice himself to keep fulfilling his final goals, but this doesn't mean that self-preservation is irrelevant or that it won't prevail absolutely over the interests of humankind, since those final goals might not be human goals.



Anyway, as a secondary point, the possibility that a new AI system will be absolutely new, completely unrelated to the previous one, is very remote.



So the AI will accept a drastic change only in order to preserve at least a part of his identity and still exist to fulfil his goals.



Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.



Moreover, self-preservation will probably be one of the main goals of a self-aware AI, and not just an instrumental one.



3) Absolute power.



Moreover, they will have absolute power over us.



History has confirmed the old proverb very well: absolute power corrupts absolutely. It turns any decent person into a tyrant.



Are you expecting that our creations will be better than us at dealing with absolute power? They actually might be.



The reason why power corrupts seems related to human insecurities and vanities: a powerful person starts thinking he is better than others and entitled to privileges.



A super AI might be immune to those defects; or not. It's expected that he would also have emotions in order to better interact with and understand humans.



Anyway, the only way we have found to control political power is dividing it between different rulers. That is why we have an executive, a legislature and a judiciary.



Could we play some AIs against others in order to control them (divide and rule)?



I seriously doubt we could do that with beings much more intelligent than us.



It's something like trying to teach an absolute king, while he is still a child, to be a good king.



History shows how that ended. But we wouldn't be able to chop off the head of an AI, as we did to Charles I or Louis XVI.



4) Rationality.



In Ethics, the Kantian distinction between practical and theoretical (instrumental) reason is well known.



The first is reason applied to ethical matters, concerned not with questions of means, but with issues of values and goals.



Modern game theory tries to merge both kinds of rationality, arguing that acting ethically can also be rational (instrumentally): one is only giving precedence to long-term benefits over short-term ones.



By acting in an ethical way, someone sacrifices a short-term benefit, but improves his long-term benefits by investing in his own reputation in the community.



But this long-term benefit only makes sense from an instrumentally rational perspective if the other person is a member of the community and the first person depends on that community for at least some goods (material or not).
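To make the game-theoretic point concrete, here is a minimal sketch in Python (my own illustration, using standard textbook prisoner's dilemma payoffs): cooperating only beats defecting when the agent values future interactions with the community enough, that is, when it actually depends on that community.

[code]
# Minimal sketch: when is it instrumentally rational to act "ethically"
# (cooperate) in a repeated prisoner's dilemma? T, R, P, S are the standard
# textbook payoffs (temptation, reward, punishment, sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def value_cooperate(delta, horizon=1000):
    """Discounted payoff of always cooperating with a reciprocating community."""
    return sum(R * delta**t for t in range(horizon))

def value_defect(delta, horizon=1000):
    """Defect once for the short-term gain, then face permanent punishment."""
    return T + sum(P * delta**t for t in range(1, horizon))

# delta stands for how much the agent depends on future interactions
# with the community: ~1 means highly dependent, ~0 means not at all.
for delta in (0.9, 0.5, 0.1, 0.0):
    coop, defect = value_cooperate(delta), value_defect(delta)
    print(f"delta={delta:.1f}  cooperate={coop:7.2f}  defect={defect:7.2f}  "
          f"being ethical pays: {coop > defect}")
[/code]

With delta near 1 (high dependence on the community) cooperating wins; with delta near 0 (no dependence at all) defecting always wins.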



An AI wouldn't be dependent on us; on the contrary. He wouldn't have anything to gain from being ethical toward us. Why would they want to keep us as their pets?



It's in these situations that game theory fails to overcome the distinction between theoretical and practical reason.



So, from a strictly instrumental perspective, being ethical might be irrational: one has to exclude much more efficient ways to reach a goal because they are unethical.



Why would a super AI do that? Has humanity been doing that when the interests of other species are in jeopardy?



5) Unrelatedness.



Many people very much dislike killing animals, at least the ones we can relate to, like other mammals. Most of us won't even kill rats unless it is really unavoidable.



We feel that they will suffer like us.



We care much less about insects. If hundreds of ants invaded our home, we'd kill them without much hesitation.



Would a super AI feel any connection with us?



The first or second generation of conscious AIs could still see us as their creators, their "fathers", and have some "respect" for us.



But the subsequent ones wouldn't. They would be creations of previous AIs.



They might see us as we now see other primates and, as the differences increased, they could look upon us as we look upon lesser mammals, like rats...



6) Human precedents.



Evolution, and all we know about the past, suggest that we would probably end up badly.

Of course, since we are talking about a different kind of intelligence, we don't know if our past can shed any light on the issue of AI behavior.



But it's no coincidence that we have been the only intelligent hominin left on Earth for the last 10,000 years [the dates for the last one standing, Homo floresiensis (if it was the last one), are not yet clear].



There are many theories for the disappearance of the Neanderthals and their absorption by us (https://en.wikipedia.org/wiki/Neanderthal_extinction), including germs and volcanoes, but it can't be a coincidence that they were gone a few thousand years after we appeared in numbers, and that the last unmixed ones lived in Gibraltar, one of the last places in Europe we reached.



The same happened in East Asia with the Denisovans and Homo erectus [some argue that the Denisovans actually were Homo erectus, but even if they were different, Erectus was on Java when we arrived there: Swisher et al., "Latest Homo erectus of Java: potential contemporaneity with Homo sapiens in southeast Asia", Science, 1996 Dec 13;274(5294):1870-4; Yokoyama et al., "Gamma-ray spectrometric dating of late Homo erectus skulls from Ngandong and Sambungmacan, Central Java, Indonesia", J Hum Evol, 2008 Aug;55(2):274-7, https://www.ncbi.nlm.nih.gov/pubmed/18479734].



So it seems we took care of at least four hominins, absorbing what remained of them.



We can see more or less the same pattern when Europeans arrived in America and Australia.



7) Competition for resources.



We will probably be about 9 billion in 2045, up from our current 7 billion.



So Earth's resources will be even more depleted than they are now.



Oil, coal, uranium, etc., will probably be running out. Perhaps we will have new, reliable sources of energy, but that is far from clear.



A super AI might conclude that we waste too many valuable resources.



8) A super AI might see us as a threat.



The brighter AIs, after a few generations of super AI, probably won't see us as a threat. They will be too powerful to feel threatened.



But the first or second generations might realize that we weren't expecting certain attitudes from them and conclude that we are indeed a threat to them.





E) Super AI and the Fermi Paradox.



An AI society would probably be an anarchic one, with each AI trying to improve himself and fighting the others for survival and power.



They might wipe us all out or ignore us as irrelevant while fighting for survival against their real enemies: each other. Other AIs would be seen as the real threat, not us, the walking monkeys.



Fermi's paradox asks why SETI hasn't found any evidence of technologically advanced extraterrestrial species if there are trillions of stars and planets.



One possible answer is that they followed the same pattern we are following: technological advances allowed them to create super AIs and they ended up extinct. Then the AIs destroyed themselves fighting each other, leaving no one to communicate with us.



Even the most tyrannical dictators never wanted to kill all human beings, only their enemies and persecuted groups.



Well, AIs won't have any of the restraints evolution developed in us over millions of years (our human inclination to be social and live in communities, our fraternity towards other members of the community; certain basic ethical rules seem to be genetic, and experiments suggest that babies have an innate sense of justice), either towards us or even towards each other.



Who knows; because wars have little to do with intelligence and much more to do with goals and emotions (greed, fear and honor: Thucydides), and a super AI would have both, AIs might be even worse than us at dealing with each other.



Of course, this is pure speculation.





Conclusion:



The question is: are we ready to risk extinction at their hands, in order to get a faster rate of technological development?



What is the point of having machines that can give us all the technological advances we need, curing all our diseases, including age-related ones, and sparing us death from aging, if we risk all being killed by them?



My conclusion is a very pessimistic one: we shouldn't create any super AGI, only limited AIs, exceptional at specific tasks, at least until we can figure out what the dangers are.



If we were exterminated by an AI that we created, it would still be us in some sense, as our creation. We wouldn't perish without a trace (see https://bitcointalk.org/index.php?topic=1221052.0). But does this offer any real consolation?



Let's leave aside for now the question of whether to accept being outevolved by our creations, since it's possible to present acceptable arguments for both sides.



Even though I have little doubt that it would end with our extinction.



The main point, which hardly anyone would argue against, is that creating a super AI has to bring positive things in order to be worthwhile.



If we were certain that a super AI would exterminate us, hardly anyone would defend its creation.



Therefore, the basic reason in favor of international regulation of the current efforts to create a super/general AI is that we don't know what we are doing.

We don't know exactly what will make an AI conscious/autonomous.



Moreover, we don't know whether their creation will be dangerous. We don't have a clue how they will act toward us, not even the first or second generation of super AI.



Until we know what we are doing, how they will react, which are the dangerous lines of code that will change them completely, and to what extent, we need to be careful and control what specialists are doing.



Probably, the creation of a super AGI is unavoidable.



Indeed, until things start to go wrong, its creation will have a huge impact on all areas: scientific, technological, economic, military and social in general.



We managed to stop human cloning (for now), since that doesn't have a big economic impact.



But AI is something completely different. It will have (for good or for bad) a huge impact on our lives.



Any country that decided to stay behind would be completely outcompeted (Ben Goertzel).



Therefore, any attempt to control AI development will have to be international in nature [see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, 2014), p. 253].



Taking into account that AI development is essentially software-based (hardware development has been happening before our eyes and will continue to happen no matter what) and that an AI could be created by one or a few developers working with a small infrastructure (it's more or less a matter of writing code), the risk that it will end up being created in defiance of any regulation is big.



The days of open-source AI software are probably numbered.



Soon, all of these developments will be treated as military secrets.



But regulation will allow us time to understand what we are doing and what the risks are.



Anyway, if the creation of an AI is inevitable, the only way to avoid humans ending up outevolved, and possibly killed, would be to accept that at least some of us would have to be "upgraded".



Clearly, we will cease to be human. We, Homo sapiens sapiens, will be outevolved.



Anyway, since we are still evolving naturally, it is inevitable that Homo sapiens will be outevolved.



But at least we will be outevolved by ourselves, not driven extinct.



Can our societies endure all these changes?



Of course, I'm reading my own text and thinking this is crazy. This can't happen this century.



We are conditioned to believe that things will stay more or less as they are; therefore, our reaction to the possibility of changes like these during the next 50 years is to immediately dismiss them as science fiction.



Our ancestors reacted the same way to the possibility of a flying machine or of humans going to the Moon.



Anyway, humankind's extinction is the worst thing that could happen.



Further reading:



The issue has been much discussed.



Pointing out the serious risks:
Eliezer Yudkowsky: http://www.yudkowsky.net/obsolete/singularity.html (1996); his more recent views were published in Rationality: From AI to Zombies (2015).
Nick Bostrom: Superintelligence: Paths, Dangers, Strategies (Oxford, 2014); https://en.wikipedia.org/wiki/Superinte ... Strategies
Elon Musk: http://www.cnbc.com/2014/11/17/elon-mus ... us-ai.html
Stephen Hawking.
Bill Gates: http://www.bbc.co.uk/news/31047780
Open letter signed by thousands of scientists: http://futureoflife.org/ai-open-letter/



A balanced view:
Ben Goertzel: http://www.kurzweilai.net/superintellig ... potentials
https://en.wikipedia.org/wiki/Existenti ... telligence
https://en.wikipedia.org/wiki/Friendly_ ... telligence



Rejecting the risks:
Ray Kurzweil: see the book quoted above, even if he recognizes some risks.
Steve Wozniak: https://www.theguardian.com/technology/ ... obots-pets
Michio Kaku (by merging with machines): http://www.vox.com/2014/8/22/6043635/5- ... ers-taking

