Why do we need research to ensure that artificial intelligence remains safe and beneficial? What are the benefits and risks of artificial intelligence? While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. AI today is properly known as narrow (or weak) AI, designed to perform a narrow task, whereas the long-term goal of many researchers is to create general AI (AGI, or strong AI).
While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task. In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I. J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task, so such a system could potentially undergo recursive self-improvement, leaving human intellect far behind. The creation of strong AI might therefore be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls. How can AI be dangerous?
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts point to two scenarios: an AI programmed to do something devastating, and an AI programmed to do something beneficial that develops a destructive method for achieving its goal. Autonomous weapons illustrate the first scenario: they are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
The second scenario can arise whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met. As these examples illustrate, the concern about advanced AI isn’t malevolence but competence: a superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.
Nor does an AI need a body of its own to act in the world: a wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding. Further down the road, there are also economic concerns, such as a future in which an individual with no marketable skill becomes a social and economic liability to the few who own either technology or hard assets. Again, the real worry isn’t malevolence; in both the near term and the long term, it is highly competent AI whose goals are not aligned with our own.