Smartphone + Machine Learning = The Future of Personal Terminals

Today the iPhone 7 was released. Whether you are an Apple fan or not, whether you follow the news eagerly or absorb it passively, everyone's attention converges on one company: Apple. As a long-term leader in technological innovation, Apple has brought us unexpected new experiences again and again, and today's iPhone 7 is no exception. So let's start with this little smartphone and talk about how machine learning will change the future of personal terminals.

Although new products and new features emerge in an endless stream, one cannot help wondering what will make the impossible possible.

The answer probably comes down to two words: "machine learning."

iPhone 7 launch event

Whether we consciously realize it or not, machine learning has long been part of our daily lives. In fact, the very fact that we barely notice it is a sign of how effective the technique is: it quietly learns from enormous volumes of real-time data every day without ever getting in the user's way, and that is exactly what makes it acceptable. Recently, however, the term has appeared frequently in commercial and mass media, sparking in-depth discussion among both artificial intelligence practitioners and consumers.

Apple long ago established a solid foundation in the field of artificial intelligence. Steven Levy's article on Apple's "iBrain" provides an in-depth analysis of the company's intricate machine learning techniques. Although Siri is only the public face of Apple's machine learning, there is no doubt that Apple's R&D in this area does not stop there. Machine learning has been applied across Apple's devices and applications: for example, predicting which apps you are likely to open when you swipe the screen, or marking the location of a hotel you booked on the map. This direct-to-consumer application of artificial intelligence sets a benchmark in the tech industry. It not only successfully increases the value of the brand, but also leads consumers to expect more from their digital experience.

Siri

Although Siri is a very popular artificial intelligence application, she is not without competitors. The emergence of virtual assistants such as Microsoft's Cortana has intensified competition among technology companies, and it is not yet clear who the winner will be. This put tremendous pressure on Apple. Apple quickly recognized the problem and began taking measures to strengthen its machine learning department, with the goal of maintaining its leading position in the field, especially ahead of new product launches. The most recent example is Apple's acquisition of Turi, an artificial intelligence company specializing in machine learning. In addition, Apple announced that beyond the iPhones already equipped with Siri, it also intends to integrate the deep learning technology behind Siri into its laptops, watches, and televisions.

iWatch

The far-reaching effect of these efforts is to apply machine learning across Apple's entire product line, marking the start of fully personalized experiences for Apple's brand and retail channels.

Users have begun to expect sophisticated customized content based on deep learning of their real-time behavior, and Apple has realized that only machine learning can meet user needs at this scale. By improving its machine learning algorithms, Apple enhances the accuracy and timeliness of its content, providing each user with a one-on-one personalized experience that ultimately translates into brand loyalty and increased revenue for the company.

Consumer expectations for personalized experiences will only increase, and Apple has made it clear that it intends to meet them. This also means that, in the face of competition, it must constantly strengthen its own deep learning technology to keep pace with developments. The same applies not only to Apple's competitors but also to its partners.

By integrating machine learning into Apple's products, brands and retailers can more easily provide consumers with the shopping experience they have been looking forward to.

Those who dare not take risks will eventually be eliminated by the times.

GPU chip

Specifically, research on neural networks and other machine learning methods in computing dates back to the 1970s. Deep learning is a branch of machine learning that uses algorithms to correlate and classify data. Deep learning systems usually require complex neural networks and large amounts of computing resources. The GPU is a chip designed specifically for graphics computation and is very common in personal devices with screens; today, most neural networks run on GPUs.
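To make the "correlate and classify" idea concrete, here is a minimal sketch of the simplest possible neural network: a single neuron (logistic regression) trained by gradient descent to classify 2-D points. The data, learning rate, and epoch count are all illustrative assumptions, not anything from Apple's or MIT's systems.

```python
import math

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit one neuron (weights + bias) with plain gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y                 # gradient of the log-loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Toy task: label a point 1 if x1 + x2 > 1, else 0.
samples = [(0, 0), (0, 1), (1, 0), (1, 1), (0.9, 0.8), (0.1, 0.2)]
labels = [0, 0, 0, 1, 1, 0]
w, b = train(samples, labels)

def predict(x1, x2):
    return int(sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5)

print(predict(1, 1), predict(0, 0))  # expected: 1 0
```

A deep learning system stacks many such units into layers; the heavy multiply-accumulate arithmetic this creates is exactly what GPUs, and chips like Eyeriss below, are built to accelerate.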

The MIT research team developed a chip called Eyeriss that reduces energy consumption to one-tenth of a typical GPU's. This opens up new possibilities for smartphone applications: powerful artificial intelligence algorithms can run directly on the mobile device without uploading data to the cloud. A computer chip optimized for deep learning in this way makes artificial intelligence far more accessible.

The 168-core chip developed at MIT can recognize faces, other objects, and even sounds, and could be applied to smartphones, self-driving cars, robots, drones, and other devices. On a common GPU, many processing units share a single memory bank. On Eyeriss, each processing unit has its own local memory, and data is compressed before being sent to a processing unit. Each Eyeriss processing unit can also communicate directly with its adjacent units, so shared data never needs to travel through main memory.

With this new chip, future smartphones will be able not only to perform everyday tasks better, but also to run artificial intelligence and deep learning workloads that previously required external resources. A smartphone with a built-in Eyeriss chip could handle more of its basic tasks on-device, such as tracking user preferences, schedules, and usage patterns, to better optimize the mobile experience. That means a completely different user experience.

AI assistant (illustration: "Iron Man")

Once the daily usage scenarios of ordinary users are categorized, a smartphone equipped with an AI assistant can clearly determine how it is being used: whether a game has been downloaded from the app store, or whether an installed application has been updated, is visible at a glance. The chip examines on-device information, such as an application's size, code characteristics, usage statistics, and online reviews, and recommends things that may interest the user. For example, a user strolling downtown suddenly wants to find a bar, and says to the on-device AI assistant: "Recommend a good bar that I haven't been to." The AI then checks the user's bank statements to determine how often they go to bars and how much they spend each time, searches reviews of those bars for keywords such as "good atmosphere" or "great beer," and finally finds a nearby new bar to recommend. At present, all of these computations are handled centrally by remote servers and the results are transmitted back to the user's phone, which makes it difficult to integrate other on-device data and applications and to manage user data effectively.
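The bar scenario boils down to a scoring-and-filtering step. The sketch below is entirely hypothetical: the bar names, review keywords, and scoring weights are invented for illustration, and a real assistant would derive its signals from bank statements and review text rather than hard-coded data.

```python
# Bars the user has already visited (would come from statements/history).
visited = {"The Anchor", "Night Owl"}

# Candidate bars nearby (names, distances, and keywords are made up).
candidates = [
    {"name": "The Anchor", "distance_km": 0.4,
     "review_keywords": {"good atmosphere"}},
    {"name": "Copper Tap", "distance_km": 0.6,
     "review_keywords": {"good atmosphere", "great beer"}},
    {"name": "Night Owl", "distance_km": 0.2,
     "review_keywords": {"great beer"}},
    {"name": "Dim Corner", "distance_km": 2.5,
     "review_keywords": set()},
]

# Keywords mined from reviews of bars the user liked in the past.
liked_keywords = {"good atmosphere", "great beer"}

def score(bar):
    # Matching review keywords count for a bar; distance counts against it.
    return len(bar["review_keywords"] & liked_keywords) - bar["distance_km"]

# Filter out bars the user has been to, then pick the best-scoring one.
new_bars = [b for b in candidates if b["name"] not in visited]
best = max(new_bars, key=score)
print(best["name"])  # Copper Tap
```

The article's point is where this loop runs: today it executes on a remote server, while a chip like Eyeriss would let the same scoring happen on the phone, keeping the bank statements and usage data on-device.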

On-device AI assistants will completely change the development of personal computing devices, first and foremost the smartphone.

Recommended reading:

Through the Facebook Moments app, LeCun shows us computer empathy

Decrypting the Secret Weapon Behind Persado: How Machine Learning Creates Marketing Content
