Facing a "security crisis", artificial intelligence needs an ethical system
IT Times reporter Pan Shaoying
Do people still have any privacy to protect in the AI era? Will AI one day control the world? As AI sets off wave after wave of enthusiasm, growing attention is being paid to its insecurity, instability and potential threats. A series of questions about AI security were raised at the security dialogue forum of the 2018 World Artificial Intelligence Conference. The Shanghai Initiative for the Safe Development of Artificial Intelligence, officially released at the forum, put forward suggestions for coping with AI security risks and guiding future development. Artificial intelligence is a double-edged sword: new "rules of the game" are needed so that the new technology and human civilization can form a benign interaction.
Restrictions on the rights of intelligent machines
It is undeniable that AI has greatly liberated social productivity and brought more convenience to daily life. But because AI technology is still immature and its governance imperfect, AI security incidents occur frequently, raising a series of ethical, legal and security problems. This is an unavoidable issue in the development of AI: on the one hand, humans keep designing ever more advanced "intelligence"; on the other, they remain cautiously alert to the possibility of artificial intelligence "betraying" them.
Human norms and morals on one side and AI on the other form a pair of "contradictions". In the view of many experts, safety and ethics need to be designed in from the start, across the whole process of AI research and development: human norms and moral values should be embedded in AI systems, and the adverse effects of AI eliminated to the greatest possible extent, so as to create a safe intelligent future for human beings.
Einstein and Leonardo da Vinci are both representatives of high IQ, yet the IQ of intelligent machines may come to far exceed that of human beings. It has been predicted that, as computer intelligence continues to improve, the IQ of artificial intelligence could reach 10,000 within 20 years, catching up with and surpassing humans.
In the future, if artificial intelligence develops perceptions similar to human beings', should it have the same rights as humans? Baidu CEO Robin Li said at the World Artificial Intelligence Conference that AI ethics must adhere to four principles: the highest principle of AI is to be safe and controllable; AI's vision is to promote more equal access to technology and capability; the value of AI lies in teaching people to learn and grow, rather than transcending and replacing them; and AI's ultimate ideal is to bring more freedom and possibility to mankind.
AI may not have human values and emotions, but it still needs an ethical system of its own. Chen Shi, president of Fengqiao Group, told the IT Times that all countries should take AI safety ethics seriously, place technological development and social ethics in the same frame, reduce the ethical impact of AI, and ensure its healthy development within a common global ethics. "It is reasonable for human beings to give intelligent machines certain rights, and intelligent machines should have the moral right to be respected. But while granting them some rights, we should also limit those rights, to ensure that artificially intelligent machines remain friendly to humans."
AI creates "transparent people"
Gao Qiqi, dean of the Institute of Artificial Intelligence and Big Data Index at East China University of Political Science and Law, has put forward the concept of the "transparent person". In the future, he said, people will be transparent, and the trend is irresistible. "It is easy to obtain someone's private data if an interested party systematically gathers the information and traces they leave behind on WeChat Moments, Weibo and other social tools," Gao Qiqi told reporters.
In recent years, reports of data leaks have surfaced from time to time, such as the recent leak of guest data at the Huazhu hotel group in China and the harvesting of 50 million Facebook users' personal information by the UK firm Cambridge Analytica. Applications of big data are closely tied to artificial intelligence, so questions of who owns big data, who may use it, who is responsible for supervising it to ensure its reasonable and legal use, and how to protect privacy and information security have become thorny problems of the AI era.
Wang Qi, founder of KEEN, told the IT Times that the use of data is inevitable in the AI era, and that information leakage is not unique to it. "But if we impose no restrictions in the age of AI, the consequences will be much more serious." Wang Qi noted that many smart devices ask users to switch on the camera and other permissions, and cannot be used if those permissions are refused; with the Internet of Things, individuals may find themselves under the surveillance of smart devices at any moment.
How can this problem be solved? It needs to be addressed at both the legal and the technical level. "The law makes manufacturers pay more attention to security during research and development and raises the cost of breaking it. Beyond the law, practitioners in the AI industry should strengthen self-discipline and maintain a sense of awe," Wang Qi said.
Ji Xinhua, CEO of Shanghai UCloud Information Technology Co., said technical measures are needed to restrain AI, "so that AI can analyze data but cannot obtain the raw data itself. This should be the direction of technical effort."
In this regard, Wang Qi pointed out that his own company often needs to collect all kinds of data for vulnerability research, and desensitizes the data after collection. "But if we examine this desensitized data through the 'eyes' of artificial intelligence, is it really safe? Can we be sure AI will never deduce key conclusions from desensitized data? It is still hard to say."
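The desensitization Wang Qi describes typically means masking or pseudonymizing identifying fields before analysis. A minimal sketch, with purely illustrative masking rules (not any company's actual scheme), shows both the technique and why residual structure can still leak information:

```python
# Illustrative data desensitization: mask direct identifiers before
# handing records to an analysis pipeline. The rules here are
# assumptions for demonstration, not a production scheme.
import hashlib

def mask_phone(phone):
    """Hide the middle digits so the raw number is not exposed."""
    return phone[:3] + "****" + phone[-4:]

def pseudonymize(user_id, salt="demo-salt"):
    """Replace a real identifier with a salted-hash pseudonym."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

record = {"user": "alice", "phone": "13812345678"}
safe = {"user": pseudonymize(record["user"]),
        "phone": mask_phone(record["phone"])}
print(safe["phone"])  # 138****5678
```

Note the caveat implicit in Wang Qi's question: the masked number still reveals its prefix and last four digits, and a stable pseudonym still links a user's records together, so a model cross-referencing auxiliary datasets might re-identify individuals even from "desensitized" data.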
Cracking down on the cybercrime industry with AI
In the AI era the network security situation is more severe and complex, but AI technology can also play a unique role in vulnerability detection, malware identification, intelligent review of harmful content, and the prevention of cybercrime. AI, in other words, is also empowering network security.
The basic principle of the new network security situational awareness, explained Mao Shengbin, senior director of Tencent's security management department, is to use data fusion, data mining, intelligent analysis and visualization to display the network security status intuitively and in real time, predict security trends, and provide a reference for threat warning and protection. The first step is to extract data from security devices (such as firewalls), network devices (such as routers and switches) and services and applications (such as databases), standardize and correct it, and annotate the basic characteristics of each event. The second step is to preprocess the collected data, for example by denoising and filtering out impurities. The third step is to fuse data from different sources. The fourth step is to apply an artificial intelligence algorithm for situation recognition, situation understanding and situation prediction. The last step is to complete correlation analysis and situation analysis, producing a situation assessment: an analysis report and an integrated network situation map that assist decision-making.
Tencent's Guardian Program, for example, has introduced a multi-dimensional dynamic verification mechanism based on AI and neural-network analysis to combat the related black market. Last year the Guardian Program security team used the technology to help police crack down on the country's two largest criminal gangs that were using artificial neural networks to crack CAPTCHA verification codes.
"Artificial intelligence should be equipped with brakes".
As He Jifeng, an academician of the Chinese Academy of Sciences, put it, technology is neutral. "Artificial intelligence can predict and crack malicious code, but it can also become a tool for developing exploits. In addition, AI algorithms carry uncertainty: usually we do not know how an artificial intelligence works or how it arrives at its results."
Artificial intelligence has drawn wide attention for its achievements in fields such as speech translation, news writing, medical assistance and defeating the world Go champion. But in his view, if artificial intelligence is divided into weak AI, general AI and strong AI, what exists today can at best barely be counted as general artificial intelligence.
According to the White Paper on Artificial Intelligence Security issued by the China Academy of Information and Communications Technology, most countries are still at an early stage in building or researching AI-related policies, regulations and standards.
Niu Jinhang, a senior engineer at the Security Research Institute of the China Academy of Information and Communications Technology, said that the United States and Germany, for example, regard safety as the first criterion for autonomous driving, but have not yet formed a unified system and method of safety evaluation. At the same time, most governments rely on enterprises themselves to carry out testing and verification, and lack independent third-party evaluation and certification bodies. Besides solving the basic security problems in the use of AI, the responsibilities of users and service providers also need to be clearly defined.
In July last year, the State Council issued the New Generation Artificial Intelligence Development Plan, which proposed that by 2020 the scale of China's core AI industry should exceed 150 billion yuan, driving related industries to a scale of more than 1 trillion yuan. Meanwhile, AI "unicorns" have been springing up.
Artificial intelligence can provide new means and new ways of protecting national network security, but because of the technology's uncertainty, many challenges will arise as applications land. In the words of Lu Chuanying, an associate research fellow at the Institute of Global Governance of the Shanghai Institutes for International Studies, artificial intelligence should be "equipped with brakes" so the industry can move both faster and more steadily.