Using AI for the Good of Mankind: Keeping Reality with Educated AI

Intelligence has always been a marker that defines humanity. Now it also defines machines and how they interact with us and the world. Machines still cannot do what we can with our adaptable intelligence, and for now, artificial intelligence (AI) remains confined to fairly small, specific tasks. Educated AI is a field of AI that takes a machine learning approach that is dynamic and responsive to the environment, yet tailored to specific applications and tasks. It is an intelligence that learns by trial and error, forming a hands-on approach that solves real-world problems to make life better and more efficient.
AI, especially artificial narrow intelligence (ANI), in which an AI is very good at one task, has become an extension of human intelligence. Our smartphones perform routine tasks such as saving people’s memories as stored photos, showing us locations with maps and GPS, recommending books and music, and tracking our preferences. With network technologies and the prevalence of the Internet cementing this trend, other ANI technologies such as driverless vehicles and domestic robots will see the worlds of human and machine intelligence become ever more intertwined. When applied to practical problems, educated AI is a catalyst for ANI.
Educated AI does not seek to replicate human intelligence, but rather is tied to five parameters:
Application specific: It uses different intelligent systems for different applications and tasks, where intelligence is measured only by the ability to perform the target tasks. For example, an AI home management system or an AI tutor is smart only in its own area; a child who asks the home manager for help with math homework may simply be pointed to an upgrade or a different system. Application-specific AI dramatically reduces the risk of errors.
Human-centered: Usable intelligence must be understandable and predictable to a human. Likewise, AI must share a person’s network of values; for example, the technology needs to understand how people will react to its actions.
User instructed: The system is relatively autonomous, but users can quickly teach it new environments; for example, walking a smart machine through your home lets the AI brain remember the layout, after which the system can decide what to do without further instruction.
Self-learning: Instructed by a user who stays in the machine learning loop, the system learns both the commands and the underlying models. It can correct errors, make judgments based on its environment, and provide the user with recommendations and reminders.
Customized: Educated AI is designed to improve the user experience in specific applications. Although it is not designed to replicate human capabilities, it can make decisions in a dynamic environment rather than simply performing repetitive tasks; for example, an AI home manager may decide to adjust the temperature or close a window based on the analysis of sensor statistics and user habits.
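The customized behavior described above can be sketched in a few lines. This is a hypothetical illustration, not a real product's logic: the sensor fields, habit data, and thresholds are all invented for the example.

```python
# Hypothetical sketch of an AI home manager combining live sensor readings
# with learned user habits. All field names and thresholds are invented.

def decide_actions(sensors, habits):
    actions = []
    # Close the window if it is raining or colder outside than the user likes.
    if sensors["raining"] or sensors["outdoor_temp"] < habits["min_comfort_temp"]:
        actions.append("close_window")
    # Nudge the thermostat toward the temperature this user usually chooses
    # at this hour, rather than toward a fixed setpoint.
    preferred = habits["preferred_temp_by_hour"][sensors["hour"]]
    if abs(sensors["indoor_temp"] - preferred) > 1.0:
        actions.append(f"set_thermostat:{preferred}")
    return actions

sensors = {"raining": True, "outdoor_temp": 12.0, "indoor_temp": 18.0, "hour": 21}
habits = {"min_comfort_temp": 15.0, "preferred_temp_by_hour": {21: 21.0}}
print(decide_actions(sensors, habits))  # ['close_window', 'set_thermostat:21.0']
```

The point of the sketch is the decision step: the system acts on an analysis of its environment and the user's habits rather than replaying a fixed routine.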
At the stage of augmented innovation, better results are produced when the human brain works alongside an educated AI system that can interact with its environment. Professor Pieter Abbeel of UC Berkeley gave an example when he trained the BRETT robot in a series of motor skills that could be applied to tasks such as putting a hanger on a rack, assembling a toy airplane and Lego blocks, and screwing a cap onto a water bottle. Previously Herculean tasks for a computer, the robot accomplished them without pre-programming, instead applying the classic human approach: trial and error. Professor Abbeel told the Berkeley News: “The key is that when a robot is faced with something new, we won’t have to reprogram it. The same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks that we gave it.”
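The trial-and-error principle behind this can be shown on a toy problem. The following is a minimal sketch in the spirit of reinforcement learning, not BRETT's actual software: an agent must discover which of several actions earns a reward, tries actions, and reinforces whatever worked.

```python
import random

# Toy trial-and-error learner (not BRETT's actual code): the agent does not
# know in advance which action succeeds; it finds out by trying.

def learn_by_trial_and_error(correct_action, n_actions=4, episodes=200, seed=0):
    rng = random.Random(seed)               # fixed seed for a reproducible run
    values = [0.0] * n_actions              # estimated value of each action
    for _ in range(episodes):
        if rng.random() < 0.2:              # explore: try a random action
            action = rng.randrange(n_actions)
        else:                               # exploit: use the best estimate so far
            action = max(range(n_actions), key=lambda a: values[a])
        reward = 1.0 if action == correct_action else 0.0
        values[action] += 0.1 * (reward - values[action])  # learning update
    return max(range(n_actions), key=lambda a: values[a])

print(learn_by_trial_and_error(correct_action=2))
```

No line of this code names the correct action in an if-then rule; the same learning loop would discover any other target just as well, which is the sense in which "the same software" can learn many tasks.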
In some ways, this mirrors Professor Andy Clark’s concept of the extended mind, in which consciousness is embedded in interaction with what surrounds us. The University of Edinburgh professor used the example of a child doing arithmetic on his fingers, which is actually part of the cognitive process. Thus, cognition is not bound by three pounds of brain tissue; rather, it flows into the environment. In Professor Abbeel’s project, BRETT likewise interacts with its environment, learning a range of tasks by trial and error using a single artificial neural network.
Another example is driverless cars: it may take several months to build a car that drives itself, but it will likely take years, if not decades, to perfect autonomous technology, as it is impossible to exhaust all possible scenarios in traditional programming. A more effective way is to teach driving by giving a large number of examples and letting the machine generalize the patterns, rather than using an “if-then” model, which fails in the face of endless scenarios.
A third example is natural language processing (NLP), which allows computers to use language as well as humans do. NLP is extremely difficult for two reasons. First, understanding language theoretically requires in-depth reasoning skills and knowledge. Second, everything a computer does has to be represented by a mathematical model. The key problem is how to represent all the knowledge in a language in a way that allows the program to reason with it and apply it to other areas.
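The second difficulty, turning language into a mathematical model, can be illustrated with the simplest possible representation. This is a sketch of the principle only: sentences become word-count vectors, and "similarity" becomes a number. Modern NLP systems use far richer representations, but the underlying move is the same.

```python
import math

# Represent a sentence as a vector of word counts over a fixed vocabulary.
def to_vector(sentence, vocabulary):
    words = sentence.lower().split()
    return [words.count(w) for w in vocabulary]

# Once sentences are vectors, similarity is just geometry: the cosine of
# the angle between them (1.0 = identical direction, 0.0 = no overlap).
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = ["the", "robot", "learns", "tasks", "weather", "is", "sunny"]
a = to_vector("the robot learns tasks", vocab)
b = to_vector("the robot learns", vocab)
c = to_vector("the weather is sunny", vocab)

# The two robot sentences are mathematically closer than the robot and
# weather sentences.
print(cosine_similarity(a, b) > cosine_similarity(a, c))  # True
```

Such a bag-of-words model captures none of the reasoning or world knowledge the paragraph above calls for, which is precisely why representing all of a language's knowledge mathematically remains the hard problem.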
In each case, AI is designed to excel at a particular task through deep learning and interaction with its environment, with the aim of improving life.