Artificial intelligence could become a horror for the world
The world is going to change: artificial intelligence now reaches into many things. Drones could be turned into missiles, fake videos could sway public opinion, and hacking could become far easier. All of this can be done with the help of artificial intelligence if it ends up in the wrong place or in the hands of bad people. A drone or a robot could be directed to attack a specific person or area, carry out terrorist attacks, or cause driverless cars to crash.
A report on artificial intelligence, or AI for short, reveals that international security experts have warned that this technology could be badly misused if it falls into the hands of rogue states that do not care about international law, or into the hands of criminals and terrorists.
For this reason, researchers say, the people creating artificial intelligence should also build in safeguards so that misuse can be prevented, or at least tackled when it happens.
To this end, a team of 26 researchers is also pushing for new legislation. They say policymakers, technologists and researchers must work together to anticipate how artificial intelligence technology could be misused and how to deal with that misuse. It must be recognized that artificial intelligence is not only suited to good applications; it can also be put to bad use.
Lessons should be drawn from computer security, a field that has long dealt with both good and bad actors. All parties need to work together to prevent and stop the abuse of artificial intelligence.
Shahar Avin of the University of Cambridge in the UK said the report highlights the risks of artificial intelligence technologies that exist today or that may reach the market within the next five years. It does not focus on the artificial intelligence of the far future, the scenario in which artificial intelligence is pushed to a superhuman level with no clear direction.
He highlighted some examples of how artificial intelligence could become horrific in the near future. Technology like AlphaGo, the artificial intelligence developed by Google DeepMind, is clever enough to outplay human intelligence; if it fell into the hands of hackers, it could be used to find patterns in data and steal it. A malicious person could train a drone with facial-recognition software so that it identifies a specific person and attacks him.
Fake videos could be misused in the same way, and hackers could use speech synthesis to imitate other people's voices.
Miles Brundage of the Future of Humanity Institute at Oxford University said that artificial intelligence will change the security risks faced by people, organizations and states, and that those risks come in many forms.
The report looks at how the world could look over the next 10 years. Its authors say artificial intelligence changes everything, and that the misuse of artificial intelligence could become more dangerous by the day. The 100-page report identifies three areas in which artificial intelligence could be misused: digital, physical and political.