ARTIFICIAL INTELLIGENCE – Benefits, Risks and Myths

by Alan Jackson — 5 years ago in Artificial Intelligence 7 min. read

WHAT IS AI?

Artificial Intelligence (AI) is intelligence exhibited by machines. It is a branch of computer science that combines hardware and software so that a machine can perform tasks that would normally require human intelligence.

AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns in the data and respond accordingly.
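
To make “learning automatically from data” concrete, here is a minimal sketch using the scikit-learn library; the tiny weather-style dataset and its “nice day” labels are invented purely for illustration, not taken from this article.

```python
# A minimal sketch of "learning from data" with scikit-learn.
# The tiny dataset below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is [hours_of_daylight, temperature_celsius]; labels mark a "nice day".
X = [[8, 5], [9, 12], [14, 22], [15, 25], [10, 8], [13, 20]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(X, y)                   # the algorithm infers rules from the examples

print(model.predict([[14, 24]]))  # -> [1]: the model generalises to unseen data
```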

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, or only internet searches, or only driving a car). The long-term goal of many researchers, however, is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.


Why Research AI Safety?

In the near term, the goal of keeping AI’s impact on society beneficial has spurred research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the AI’s goals with ours before it becomes superintelligent.
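
The intelligence-explosion argument is, at heart, compounding growth. The toy loop below makes the intuition visible; the “capability” units and the assumed 10% improvement per generation are invented for illustration only.

```python
# Toy illustration of recursive self-improvement as compounding growth.
# The capability units and the 10% gain per generation are invented.
capability = 1.0
for generation in range(1, 51):
    capability *= 1.10        # each system designs a slightly better successor
    if generation % 10 == 0:
        print(f"generation {generation}: capability {capability:,.1f}")
# After 50 generations capability is ~117x the original: small repeated
# gains compound into runaway growth, the intelligence-explosion intuition.
```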

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. We recognize both of these possibilities, but we also recognize the potential for an artificial intelligence system to cause great harm, intentionally or unintentionally. We believe that research today will help us better prepare for and prevent such potentially negative consequences in the future, so that we can enjoy the benefits of AI while avoiding its pitfalls.

How can AI be Dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. AI is programmed to do something destructive: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off”, so humans could plausibly lose control of such a situation. This risk exists even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
  2. AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with our own, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met. (A toy sketch of such a misspecified objective follows this list.)
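
As a deliberately naive sketch of the “literally what you asked for” failure mode, the code below optimises an objective that encodes nothing except speed; the route names and numbers are invented for this illustration.

```python
# A toy misspecified objective (all route data invented for illustration).
routes = [
    {"name": "highway",        "minutes": 25, "discomfort": 1},
    {"name": "reckless_short", "minutes": 12, "discomfort": 9},
]

def cost(route):
    # The objective encodes only speed; comfort and safety were never specified.
    return route["minutes"]

best = min(routes, key=cost)
print(best["name"])  # -> "reckless_short": literal compliance, not intent
```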

As these examples show, the concern about advanced AI is not malevolence but competence. A superintelligent AI would be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we have a problem. You are probably not an evil ant-hater who steps on ants out of malice, but if you are in charge of a hydroelectric green-energy project and there is an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

Why the recent interest in AI safety

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones which experts viewed as decades away merely five years ago have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We cannot use past technological developments as much of a basis, because we have never created anything that has the ability to outsmart us, wittingly or unwittingly. The best example of what we could face may be our own evolution. People now control the planet, not because we are the strongest, fastest or biggest, but because we are the smartest. If we are no longer the smartest, are we assured of remaining in control?

The top Myths about advanced AI

A fascinating conversation is taking place about the future of artificial intelligence and what it will or should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as AI’s future impact on the job market; if and when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions, rather than on the misconceptions, let’s clear up some of the most common myths.

Timeline Myths

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we will get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we would have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the flip side, a popular counter-myth is that we know we won’t get superhuman AI this century. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933, less than 24 hours before Szilard’s invention of the nuclear chain reaction, that nuclear energy was “moonshine”, a reminder of how badly such confident techno-skeptic predictions can age. The most extreme form of this myth is that superhuman AI will never arrive because it is physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there is no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we will have human-level AI with at least 50% probability. All these surveys reach the same conclusion: the world’s top experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was the year 2045, but some researchers guessed hundreds of years or more.

There is also a related myth that those who worry about AI think it is only a few years away. In fact, most people on record worrying about superhuman AI guess it is still at least decades away. But they argue that as long as we are not 100% sure that it will not happen this century, it is smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it is prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

CONTROVERSY MYTHS

Another common misconception is that the only people voicing concerns about AI and advocating AI safety research are luddites who do not know much about AI. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people do not need to be convinced that risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
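
The insurance analogy is simple expected-value arithmetic; the probability, loss, and premium below are invented numbers chosen only to show the shape of the argument.

```python
# Toy expected-value arithmetic behind the insurance analogy.
# All numbers are invented for illustration only.
p_fire = 0.002            # non-negligible chance of the house burning down
cost_of_fire = 400_000    # loss if it does
premium = 600             # modest yearly investment

expected_loss = p_fire * cost_of_fire   # 0.002 * 400000 = 800
print(expected_loss > premium)          # True: the premium is justified
```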

It may be that the media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he does not care about AI safety, whereas in fact he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

The interesting controversies

Not wasting time on the misconceptions above lets us focus on true and interesting controversies where even the experts disagree. What kind of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Would you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? Please join the conversation!

Alan Jackson

Alan is the content editor and manager of The Next Tech. He loves to share his technology knowledge by writing blog posts and articles. Besides this, he is fond of reading books, writes short stories, and is an EDM music and football lover.

