Allen Institute CEO: artificial intelligence will not pose a threat to human survival

Elon Musk is planning to develop a fully autonomous car. Such cars require powerful artificial intelligence to read and react in real time to the complex driving environment around them. Artificial intelligence is producing amazing results. Just last week, the team behind AlphaGo reported that its software can already find its way through the intricate London Underground like a native Londoner. Even the White House has begun to pay attention to artificial intelligence: a few days ago it released a research report on promoting the future of artificial intelligence in the United States.


However, computer scientist Oren Etzioni says we still have a long way to go before we can hand the world over to computers, or need to worry about computers threatening humanity. For the past few years, Etzioni has focused on researching and solving fundamental problems in artificial intelligence. He is currently the CEO of the Allen Institute for Artificial Intelligence.

In 2014, Microsoft co-founder Paul Allen founded the institute (AI2) to focus on the potential benefits of artificial intelligence. The institute does not believe that artificial intelligence will pose a threat to human survival, as depicted in Hollywood blockbusters.


Microsoft co-founder Paul Allen

The Allen Institute's projects may not sound glamorous. They include Semantic Scholar, an artificial-intelligence-based search engine for academic research. But the institute also works on other areas of artificial intelligence, such as reasoning. In Etzioni's words, this will free artificial intelligence from being confined to "narrow areas, able to do only one thing."

At a recent AI conference in New York, Scientific American interviewed Etzioni. In the interview, he expressed concern about companies' excessive hype around artificial intelligence, especially the current capabilities of deep learning. Deep learning uses neural networks loosely modeled on the human brain's nervous system to process large data sets, allowing computers to learn certain skills, such as recognizing patterns and specific objects in photos.

Etzioni also talked about why a 10-year-old is smarter than DeepMind's AlphaGo, and why humans need to develop artificial intelligence "guardian" programs to ensure that other artificial intelligence systems do not pose a danger to humans.


Allen Institute CEO Oren Etzioni

The following is the main content of the interview:

Q: Is there any disagreement between researchers about the best way to develop artificial intelligence technology?

Etzioni: We have made real progress in speech recognition, driverless cars, and of course AlphaGo. These are real technical achievements. But how should we characterize them? Deep learning is clearly a very valuable technology, but there are many other problems to solve in developing artificial intelligence, such as reasoning (meaning the machine understands that 2+2=4, rather than just computing it by rule) and acquiring the background knowledge needed to establish context. Natural language understanding is another example. Even though we have AlphaGo, we still cannot build software that reads and fully understands a paragraph, or even a simple sentence.

Q: There is a view that, when it comes to artificial intelligence, deep learning is "the best tool we have." Is that too optimistic about deep learning?

Etzioni: If you have a lot of labeled data, so the computer knows what it means, and you have a lot of computing resources and are trying to find patterns in that data, then we have seen that deep learning is unbeatable. AlphaGo, for example, processed 30 million board positions to learn which move is best in a given situation. Other cases are similar, such as radiological images in hospitals. If we can label many images as "tumor" or "no tumor," deep learning software can then judge for itself whether an unseen image shows a tumor. There are many things you can do with deep learning. It is a leading technology.
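The pattern Etzioni describes, learning a classifier from labeled examples and then judging unseen cases, can be sketched in miniature. This is a hedged illustration only: a nearest-centroid classifier stands in for a deep network, and the feature vectors and labels are invented toy values, not real medical data.

```python
# Minimal sketch of supervised learning from labeled data.
# A nearest-centroid classifier stands in for a deep network;
# features and labels are toy values for illustration only.

def train_centroids(examples):
    """Average the feature vectors of each class ("tumor" / "no tumor")."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the class whose centroid is closest (squared Euclidean distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

labeled = [([0.9, 0.8], "tumor"), ([0.8, 0.9], "tumor"),
           ([0.1, 0.2], "no tumor"), ([0.2, 0.1], "no tumor")]
model = train_centroids(labeled)
print(classify(model, [0.85, 0.75]))  # a new, unlabeled example → "tumor"
```

The point of the sketch is the workflow, not the model: given enough labeled examples, the program generalizes to inputs it has never seen, which is exactly the regime where Etzioni says deep learning is unbeatable.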

Q: So what is the problem?

Etzioni: The problem is that there are many kinds of intelligence, not just training a program on big data. Imagine standardized tests such as the SAT or a college entrance exam. Software cannot get a top score by observing 30 million previous exams labeled "successful" or "unsuccessful." This is a more complex process that requires interactive learning. Intelligence also includes the ability to learn through suggestions, conversations, and books. Despite the great progress in deep learning, we still cannot build software that can do what a 10-year-old can: pick up a book, read one of its chapters, and then answer questions about its content.

Q: So, how can artificial intelligence pass the standardized test?

Etzioni: At the Allen Institute, we have run research projects in this area. Last year, we announced a $50,000 prize for anyone who could develop artificial intelligence software that passes an 8th-grade standardized science exam. More than 780 teams from around the world took part and worked on it for several months, but no team's system could score even 60 percent, and that was on just the exam's multiple-choice questions. That shows where we stand.

Q: How do the top AI systems manage to answer questions correctly?

Etzioni: Usually, the language contains clues. The most advanced systems use information from science textbooks and other open sources, and apply powerful information-retrieval techniques to search for the best answer to each multiple-choice question. For example: which of the following is the best conductor? The options include a plastic spoon, a wooden fork, and an iron bar. The programs are very good at exploiting such patterns. In many documents, "electricity" or "conductor" appears together with "iron" and not with "plastic," so in some cases a program can take a shortcut and find the answer, much the way a child guesses. Since no system reached 60 percent, I assume these programs are making educated guesses with statistical methods rather than reasoning.
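The statistical "shortcut" Etzioni describes can be sketched directly: score each answer option by how often it co-occurs with the question's key term in a corpus, and pick the highest-scoring option. The four-sentence corpus below is invented for illustration; real systems retrieve over textbooks and web-scale text.

```python
# Toy sketch of answering a multiple-choice question by co-occurrence
# statistics rather than reasoning. The corpus is made up for illustration.

corpus = [
    "electricity flows easily through an iron bar",
    "iron conducts electricity well",
    "a plastic spoon does not conduct electricity",
    "wood is a poor conductor",
]

def cooccurrence_score(option, key_term, docs):
    """Count documents mentioning both the option and the key term."""
    return sum(1 for doc in docs if option in doc and key_term in doc)

def answer(options, key_term, docs):
    """Pick the option most associated with the key term -- a guess, not reasoning."""
    return max(options, key=lambda opt: cooccurrence_score(opt, key_term, docs))

print(answer(["plastic", "wood", "iron"], "electricity", corpus))  # → "iron"
```

Note what the program never does: it has no model of what conduction *is*. It gets the right answer here only because "iron" and "electricity" happen to co-occur more often, which is exactly why Etzioni calls this an educated guess.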

Q: The DeepMind team that developed AlphaGo is now developing artificial intelligence programs that add external memory systems to go beyond deep learning. What impact will their work have on developing artificial intelligence that is closer to the human brain?

Etzioni: DeepMind is still the leader in advancing deep neural networks. This contribution is an important step toward artificial intelligence that can reason, but it is also a small one: linking facts through a graph structure, such as a map of the subway system. Today's conventional software can accomplish the same task easily; the achievement here is that a neural network learned how to do it from examples. That is worthy of a Nature paper. Overall, this is a big step for DeepMind, but a small step for mankind.
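Etzioni's point that "conventional software can accomplish the same task easily" is worth making concrete: finding a route through a subway map is a classic graph-search problem. A sketch with breadth-first search, on a toy map with invented station names, solves it in a few lines, which is what makes a neural network *learning* to do it from examples the interesting part.

```python
# Subway-style route finding with classical breadth-first search.
# Station names and the map itself are invented for illustration.
from collections import deque

def shortest_route(graph, start, goal):
    """Return the fewest-stops path from start to goal, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

subway = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(shortest_route(subway, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

BFS guarantees the fewest-stops route without any training data at all, which is the contrast Etzioni is drawing: the DeepMind result matters not because the task is hard for software, but because the solution was learned rather than programmed.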

Q: Is it possible to combine different technologies, such as deep learning, machine vision and memory, to develop a more complete artificial intelligence system?

Etzioni: This is a very attractive idea. When I was a professor at the University of Washington, I did a lot of research in this area; the basic concept was to use the Web as a database for artificial intelligence systems. We developed a technique called Open Information Extraction, cataloging 5 billion web pages, extracting statements from them, and trying to turn them into knowledge that could guide machine behavior. Machines have a superhuman ability to scan web pages and collect all those statements. The problem is that the statements are in text or images. Human brains can map such material onto reasoning operations, but computer scientists have yet to discover how. A universal knowledge base with an artificial-intelligence interface exists only in science fiction, because we have not yet figured out how to transform text and images into forms that machines can understand the way humans do.
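The core idea of Open Information Extraction, turning free text into (subject, relation, object) statements, can be shown with a deliberately naive sketch. Real Open IE systems use parsers and learned extractors over billions of pages; the single regular expression below handles only the simplest "X verb Y" sentences, and its verb list is invented purely for illustration.

```python
# Toy sketch of the Open Information Extraction idea:
# pull (subject, relation, object) triples out of free text.
# Real systems use parsers and learned extractors, not one regex.
import re

PATTERN = re.compile(r"^(\w+)\s+(is|invented|founded)\s+(.+)\.$")

def extract_triples(sentences):
    triples = []
    for sentence in sentences:
        match = PATTERN.match(sentence)
        if match:
            triples.append((match.group(1), match.group(2), match.group(3)))
    return triples

text = ["Edison invented the light bulb.",
        "Iron is a metal.",
        "What a nice day!"]
print(extract_triples(text))
# → [('Edison', 'invented', 'the light bulb'), ('Iron', 'is', 'a metal')]
```

The gap Etzioni describes shows up immediately: the extractor harvests statements at machine speed, but nothing in it understands what "invented" means, which is exactly the text-to-reasoning step that remains unsolved.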

Q: You have said that human-level artificial intelligence is at least 25 years away. What is your definition of human-level artificial intelligence, and how did you arrive at that time frame?

Etzioni: True understanding of natural language, plus the breadth and generality of human intelligence. We can learn to play chess, know how to cross the road, and figure out how to cook a meal; that diversity is the hallmark of human intelligence. All we are doing now is developing narrow artificial intelligence that does one thing well. As for the time frame: I asked my colleagues at the Association for the Advancement of Artificial Intelligence when we might develop a computer system as smart as humans. No one believed such a system could be built within 10 years; 67 percent believed it would take 25 years or more; and 25 percent thought it would never be realized. Could they be wrong? Yes. But whom should you trust: the people building the field, or Hollywood?

Q: Why are many well-respected scientists and engineers warning that artificial intelligence will threaten humans?

Etzioni: I have a hard time understanding the motivations of Stephen Hawking and Elon Musk in discussing general artificial intelligence. My guess is that discussing black holes may get boring after a while; it is a slowly advancing topic. What I can say is that when they, along with Bill Gates and others, discuss the threat of artificial intelligence or its catastrophic consequences, they always add qualifiers: this will "eventually" happen, or it "could" happen. I agree with that. But what we are discussing may be thousands of years in the future, or a future that never comes. Could artificial intelligence bring about the end of humanity? It is possible. But I don't think this long-term discussion should distract us from practical issues, such as artificial intelligence and employment, or artificial intelligence and weapon systems. Qualifiers like "eventually" and "in theory" often get lost in transmission.

Q: Given the shortcomings of artificial intelligence, should people worry about automakers' efforts to develop driverless cars?

Etzioni: I am not keen on driverless cars without steering wheels or brake pedals. Knowing the state of computer vision and artificial intelligence, I would not be comfortable in one. However, I support a blended system, for example one that brakes automatically when you are dozing off. The combination of a human driver and an automated system will be safer than either alone. This is not simple; introducing new technologies and integrating them into the way people work and live never is. But I would not assert that the best solution is to let the car do everything on its own.

Q: Google, Facebook, and other well-known technology companies have launched the "Partnership on Artificial Intelligence to Benefit People and Society" to explore ethical and social best practices in artificial intelligence research. Is the technology advanced enough for this dialogue to be meaningful?

Etzioni: It is a good idea for the world's leading technology companies to come together to think about these issues. I think they did it in response to outside concerns about whether artificial intelligence will take over the world. Many of those concerns are overstated. Even if we have driverless cars, there won't be 100 of them gathering together and saying, "Let's occupy the White House." The threats Musk and others discuss are decades to centuries away. However, we do face real problems: automation, digital technology, and artificial intelligence as a whole will affect employment, whether through robots or otherwise, and that is a real problem. Driverless cars and trucks will greatly improve safety, but they will also affect the many workers who make a living by driving. Another problem is discrimination by artificial intelligence. If artificial intelligence processes loan or credit-card applications, will it do so in a way that is not only legal but also ethical?

Q: How do you guarantee that the behavior of artificial intelligence programs is legal and ethical?

Etzioni: If you are a bank using a software program to process loans, you cannot hide behind it. Simply saying "the computer made the decision" is no defense. Even if a program does not use race or gender as a variable, it may still discriminate. Because the program has access to many variables and statistics, it may find correlations between, say, zip codes and other variables, and end up using those as stand-ins for race or gender. If the program uses such surrogate variables to influence decisions, that is a problem, and one that is hard for humans to detect or track. So our suggestion is to develop artificial intelligence "guards": AI systems that monitor and analyze the behavior of, for example, an AI-based loan-processing program, ensuring that it complies with the law and behaves ethically.
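One simple form such a "guard" could take, a sketch of the idea rather than anything Etzioni's group has published, is an audit that flags input variables acting as proxies for a protected attribute, for example by checking correlation against a threshold. The applicant data and the 0.8 threshold below are invented for illustration.

```python
# Hedged sketch of an "AI guard" auditing a decision model's inputs for
# surrogate (proxy) variables. Data and threshold are invented examples.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0

def audit(variables, protected, threshold=0.8):
    """Return names of variables that look like proxies for the protected attribute."""
    return [name for name, values in variables.items()
            if abs(pearson(values, protected)) > threshold]

applicants = {
    "zip_code_group": [1, 1, 0, 0, 1, 0],  # tracks the protected attribute exactly
    "income":         [3, 5, 4, 6, 2, 5],  # only weakly related
}
protected_attr = [1, 1, 0, 0, 1, 0]
print(audit(applicants, protected_attr))  # → ['zip_code_group']
```

A real guard would need far more than pairwise correlation (combinations of variables can proxy jointly without any single one standing out), but the sketch shows the shape of the idea: an independent program watching the decision program's inputs and outputs, which is what makes the problem tractable for oversight at all.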

Q: Do such artificial intelligence guards exist today?

Etzioni: We have appealed to the research community to promote the development of such technologies. There may be a small amount of research under way, but at this stage it is only a goal. We hope the concept of artificial intelligence guards can counter the image of artificial intelligence rendered by Hollywood, such as in "The Terminator," where technology is a demon, a cold power.
