dkspost parts 12-16

12th part:

There are a lot of concerns among readers of these posts about how the world will look in the years to come. Intelligence can be defined as the ability to think, decide, and apply existing and derived knowledge and skills to accomplish complex goals in the face of the unanticipated events and partial data that occur in our lives. As Howard Gardner has argued, intelligence has many types and cannot be measured with a single IQ test. Today’s AI is narrow but growing stronger. AI takes many avatars: protector AI, pious AI, art AI, knowledge AI, gaming AI, virtual AI, and so on.

Readers ask: Will machines dominate humans? Will we become slaves? Will there be jobs for people? If so, what kind of jobs? Will weapons on drones lead to destruction or to a safer world? Will current inequalities decrease or increase? How will life, freedom, and human intelligence be impacted? How will people collaborate with robots? Is it safe to work and collaborate with robots? How do we ensure safe and beneficial AI?

We do not have answers to these and many other questions. Utopians think everything will be good. Dystopians think the world will collapse. Luddites think technology will kill itself. Technical skeptics say AI will not replace our minds.

Roger Penrose, the recent Nobel laureate, argues that strong AI will not do everything a human mind does; it will not become a thinking machine. I gave a talk at IISc in 1967 on whether computers would become thinking machines, and I too said they would not. This was despite the prediction of Irving Good, an associate of Alan Turing, who was among the first to talk about machine intelligence. Let us see the quote.

As the British mathematician Irving Good put it back in 1965: “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.”

Stephen Hawking warned that the rise of robots may be disastrous for mankind.

Only time will give us answers, but one thing is clear: we need to build enough intelligence into AI systems that they are safe and all their actions are benevolent.

13th part – book by Max Tegmark:

Since AI will have serious impacts on our lives, we need to direct its path and help set up procedures for beneficial AI.

Let us start with a well-written critique, the book by Max Tegmark titled ‘Life 3.0: Being Human in the Age of Artificial Intelligence’. He describes three stages of life:

“Life 1.0 is unable to redesign either its hardware or its software during its lifetime: both are determined by its DNA, and change only through evolution over many generations. In contrast, Life 2.0 can redesign much of its software: humans can learn complex new skills—for example, languages, sports and professions—and can fundamentally update their worldview and goals. Life 3.0, which doesn’t yet exist on Earth, can dramatically redesign not only its software, but its hardware as well, rather than having to wait for it to gradually evolve over generations.” He also sketches a futuristic scenario in which human-designed AI takes over commerce and entertainment, creates enormous wealth, and then goes on to take over governments with assurances of no taxes and a basic income for all. A kind of AI-based communism is envisaged.

One of today’s most prominent cyborg proponents is Ray Kurzweil. In his book The Singularity Is Near, he argues that the natural continuation of this trend is to use nanobots, intelligent biofeedback systems, and other technology to replace first our digestive and endocrine systems, our blood, and our hearts by the early 2030s, and then to move on to upgrading our skeletons, skin, brains, and the rest of our bodies during the next two decades.

Well, AI developments cannot be stopped, only directed and slowed. Elon Musk is on the opposite side from Larry Page, the Google founder and a strong believer in AI. Musk cautions us. As Tegmark puts it: “Elon Musk argued that what we need right now from governments isn’t oversight but insight: specifically, technically capable people in government positions who can monitor AI’s progress and steer it if warranted down the road.”

So, values and ethics will dominate. Education should give primacy to values and ethics. Plato emphasized that the selection of officers should be based on people’s values and integrity rather than their knowledge. Knowledge can be acquired, but values need to be inculcated from childhood.

14th part – Tegmark continued:

One immediate impact of robots and AI is large-scale job loss, a worrisome prospect for us. Tegmark poses three questions to help decide on careers and jobs in the near future.

They are:

“Does it require interacting with people and using social intelligence?”

“Does it involve creativity and coming up with clever solutions?”

“Does it require working in an unpredictable environment?”

He gives this advice to kids.

“Career advice for today’s kids: Go into professions that machines are bad at—those involving people, unpredictability and creativity.”

15th part – my comments on AI concerns:

There are a lot of opinions on AI. Some people are scared. Some feel humans will remain superior. Some have concerns. A few points deserve our attention.

1. Technological development cannot be stopped. It can be slowed down by government policies. It can be redirected to some extent, but not a lot.

2. Good and bad or evil will both be present in society. We cannot control this, because both are relative terms: what is good for me may not be good for many others. Is it possible to stress tolerance by all? Can we do something to minimize the hatred we see in India?

3. Societal living is changing, with more isolation. Whether this is good in the long run needs to be debated. How do we balance the freedom of the individual with cooperation and tolerance?

Time will provide answers.

4. We are intelligent and self-correcting people. So if the situation goes bad, corrections will be made by scientists, politicians, and the people. I hope we do not reach the precipice.

5. I see enormous amounts of discussion among AI researchers, famous scientists, and industrialists. This has been going on for decades, and hundreds of books have come out. Yet AI research, despite the Dartmouth College discussion by eminent computer scientists – John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon – is moving at a slow to moderate pace. Even now, despite all the optimism, we are only at the beginning stage of AI.

Fortunately, it looks like AI will not advance at the speed of atomic bomb development. That is a blessing for humanity.

6. Robots will misbehave; there is no 100% guarantee that Asimov’s laws will hold. Robots, along with drones, will be used in warfare. These need control.

7. Jobs are not easy to come by. But the welfare state will step in, and capitalism needs to deal with people’s welfare for its own safety. Basic income is being discussed. We have economists like Piketty and Nobel laureate Banerjee talking about poverty and trying to move away from supply-and-demand economics, because, contrary to the belief of economists, the economy is many times more complicated.

8. Education needs to change. Basic values, ethics, and habits are dominant and need to be taught. Education will be more home-based.

9. People may not crowd into big cities. There will be dispersal.

10. Strong, concerted, conscious, and positive efforts are the need of the hour to simplify technology usage, protect people from fraud, and allow participation by all. We must avoid excluding people from daily life. This means we have to unite, not divide.

11. Life is complicated. We need more cooperation and collaboration, not less. Both isolation and cooperation will coexist.

The above opens up further discussion.

Let us close this discussion of AI with a quote from Swami Vivekananda, suggested by Mr P V Joshi:

“We are what our thoughts have made us; so, take care of what you think. Words are secondary. Thoughts live; they travel far.” Swami Vivekananda

16th part – closing comments:

There are still more comments on AI. I want to repeat that AI development and use cannot be stopped. We did not stop nuclear bombs or armed drones. We are not a soft state, as defined by Gunnar Myrdal, but a fragile one, so decisions and consensus are tough to come by. There is no question of stopping AI.

Most believe AI will not be better than humans in cognition. Life will not be the same; it will need more effort and thinking from us. We need to prepare for changes. Welfarism will become important. Values, not materialism or technology, will be the focus of education.