Robotics and Artificial Intelligence

Ullas Ponnadi

The evolution of robotics and artificial intelligence has led to predictions that, in the near future, human-created computing power will exceed the capabilities of the human brain. That possibility holds enormous potential for a wide range of applications, and could also be a turning point that stretches the capability and relevance of human intelligence.

Introduction
Thanks to the entertainment industry, which made the term ‘robots’ popular, we tend to think of robots as talking, walking, human-like creatures that can intelligently do all our tasks while we sit and watch television or go and play with our kids. Or perhaps attend a musical or a play at the theatre, while our mundane daily chores are all taken care of!

Is that what a robot really is? What does Robotics really mean? And what is AI, an acronym for Artificial Intelligence?

Let us discover all of that, as we read through this article.

Robot
This term was first coined by the Czech writer Karel Čapek in his 1920 science fiction play R.U.R. (Rossum’s Universal Robots). The Czech word it derives from, robota, translates to ‘forced labour’ in English.

The play is about a factory that makes artificial people out of synthetic organic material. These people, called roboti (robots), are closer to the cyborgs and clones of the current day. They could think for themselves. Initially they were happy to be controlled by humans, but a rebellion by the robots eventually leads to the extinction of the human race!

Robotics, as a branch of science and engineering, deals with human-created machines that can aid and assist human tasks far more efficiently. High-quality engineering design and control systems allow such machines to take intelligent actions. These machines are programmed in advance and can operate automatically or via remote control.

AI or artificial intelligence
The conceptualization of AI began long ago; one can see the thought process in mythology, where sculptors and painters created beautiful works and literally “breathed life into them”.

Modern AI has its origins in the 1930s, when the process of human thinking began to be described as the mechanical manipulation of symbols. This line of thinking led to the first programmable digital computers in the 1940s, machines based on the abstract essence of mathematical reasoning. These devices and the ideas behind them inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

Evolution of robots and AI
Computing and computers have evolved since the 1940s into far more powerful machines. The most common gadgets we use these days, be it the smartphone or the tablet, are ultra-powerful computers that, through applications, can become the brains for the AI we are speaking about.

In parallel, robots and robotics have evolved from the characters in Čapek’s play, and the visuals we see in movies and cartoons, into machines that actually look like the living beings created by nature. Today you can see robots that look like insects, dogs, human beings, birds, and serpents, and these are being used across the world for various purposes.

We are now at that transition point, where we can combine the power of robotics and AI to create super intelligent machines.

The science of robotics
Robotics is an amalgamation of physics, math, and often biology and chemistry. The design of a robot, as stated earlier, mimics life as created by nature, be it insects, birds, animals, or humans. The choice of such a design is based on the application it targets. Hence, every aspect of that application has to be understood and built into the design process.

What makes this such a fascinating and complex topic is that what we consider simple in nature is in fact an extremely efficient adaptation, perfected over millions of years. Consider the simple act of bending an arm and examine the physics behind it. Each joint has to act in unison, controlled by signals from the brain carried through nerve channels to every joint, which collectively decide how far the arm has to bend while spending the least amount of energy. The decision to bend, and the extent and direction of the bending, is guided by feedback, either from the eye or from other sensory organs (for example, the skin, if the intent is to scratch an itch). This feedback has to be continuously relayed back to the brain for the hand to reach the correct place and perform the right action.

It is this complex yet inconspicuous design that needs to be recreated in a robot.
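To make the feedback idea above concrete, here is a minimal sketch in Python of a joint that repeatedly compares where it is with where it should be and corrects itself. It is an illustration only, not the control scheme of any particular robot; the gain and tolerance values are made-up assumptions.

# A minimal sketch of a feedback loop for a single joint (illustrative only).
def move_joint(target_angle, current_angle, gain=0.2, tolerance=0.5):
    """Step the joint toward the target angle using proportional feedback."""
    path = []
    while abs(target_angle - current_angle) > tolerance:
        error = target_angle - current_angle      # feedback: how far off are we?
        current_angle += gain * error             # apply a small correction
        path.append(round(current_angle, 2))
    return path

# Bend from 10 degrees to 90 degrees and print the intermediate angles.
print(move_joint(target_angle=90.0, current_angle=10.0))

Each pass through the loop plays the role of the sensory feedback described above: sense the error, make a small correction, and repeat until the arm is where it should be.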

The physics and math of robotics and AI
The physics, in a specific scenario, is as described above. If we expand the scope and want to design robots that move like an insect, fly like a bird, swim like a fish, walk like a human, or, in a more evolved form, emote like a human being, we need to study and understand the steps in each such physical and biological process, and then work out how to design that into a machine.

Let us take the hummingbird as an example. A bird that flaps its wings about 50 times per second, in a tiny frame, and can hover for a long time is a miracle of nature’s creation! The closest thing to this is the drone: essentially a robot that can hover, fly, and do interesting tasks for humans (an example being overhead movie shots). While the physics of hummingbird flight is still being studied, engineering has created methods by which a drone can hover and fly for about 30 minutes on battery power, move in 3D space, and then, as an application, capture images and videos and send them to computers on the ground.

AI is usually needed for the more complex robots, especially those that come closer to a human being in terms of decision-making. It is also needed in robots that move and fly, to make clear decisions such as how far to go without colliding.

The human form is the most complex. A perfect example is Nadine, a humanoid with emotional intelligence. Nadine is a robot that looks like a human and is powered by technology similar to Apple’s Siri. She can respond to certain prompts and react to human interaction and tone. Nadine has extensive inbuilt memory that allows her to recognize people she has met before and recall past conversations, almost like an old friend whom we have met and spent time with!

This is complex AI: speech recognition, and controlling the spoken response according to emotion. AI is, at its core, an application of mathematics that in its evolved form takes the shape of algorithms and decision trees, the fundamentals of which we learn at the school level as algebra, flow charts, and matrix computation.
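As a simple illustration of the decision-tree idea, here is a toy example in Python. The rules and categories are purely hypothetical; real conversational AI of Nadine’s kind is far more sophisticated.

# A toy decision tree: a chain of simple rules (the flow-chart idea from school)
# that maps an observed tone and the words heard to a style of reply.
def choose_response(tone, words):
    if tone == "angry":
        return "calm, apologetic reply"
    if tone == "happy":
        if "remember" in words:
            return "recall a shared memory"
        return "cheerful reply"
    return "neutral reply"

print(choose_response("happy", ["do", "you", "remember", "me"]))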

The term ‘Singularity’ refers to a predicted moment when the evolution of AI reaches a stage where self-improvement cycles create a massive explosion of AI, resulting in a super-intelligence that surpasses all human intelligence. It also has a not-so-pleasant after-effect: this new super-AI would continue to upgrade itself without human intervention, and could hence signal the end of the human intelligence era!

Experts in this field predict that this will happen between 2030 and 2045, with a median estimate of 2040.

Robotics and math
We saw some of this math above in the case of Nadine. An autonomous robot is one that can move or fly by itself. The math of such motion is handled by an algorithm called SLAM, which stands for Simultaneous Localization and Mapping: a complex term that in simpler words means giving the robot answers to ‘where am I?’ and ‘what do my surroundings look like?’, so that it can decide where to go next.
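The sketch below is a toy illustration of that idea, not a real SLAM implementation: a robot on a straight line estimates both its own position and the position of a wall it can only sense through noisy measurements. The noise levels and blending weights are assumptions made purely for illustration.

# Toy 1-D "SLAM": estimate the robot's position and the wall's position together.
import random

random.seed(1)

true_robot = 0.0        # actual robot position (unknown to the robot)
true_wall = 10.0        # actual wall position (unknown to the robot)

est_robot = 0.0         # robot's estimate of its own position (localization)
est_wall = None         # robot's estimate of the wall position (mapping)

for step in range(20):
    # 1. Move forward by a commanded 0.5 m; the real motion is slightly noisy.
    true_robot += 0.5 + random.gauss(0, 0.05)
    est_robot += 0.5                          # dead reckoning drifts over time

    # 2. Sense the distance to the wall (also noisy).
    measured_range = (true_wall - true_robot) + random.gauss(0, 0.1)

    # 3. Update the map: where must the wall be, given my position estimate?
    wall_guess = est_robot + measured_range
    est_wall = wall_guess if est_wall is None else 0.9 * est_wall + 0.1 * wall_guess

    # 4. Update localization: where must I be, given my wall estimate?
    est_robot = 0.9 * est_robot + 0.1 * (est_wall - measured_range)

print(f"true robot {true_robot:.2f}, estimated {est_robot:.2f}")
print(f"true wall  {true_wall:.2f}, estimated {est_wall:.2f}")

The two estimates correct each other every step, which is the essence of the ‘simultaneous’ in SLAM; real systems do this in two or three dimensions with many landmarks and far more careful probability math.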

That covers the navigation and movement part of it. Tasks such as grasping an object, defusing a bomb, picking and placing objects, or speaking to a human are different, and are guided by basic principles of physics and math combined with some level of AI. We spoke about conversations and emotions above. For other tasks, consider this: if a robot arm has to hold an egg without breaking it, and the same arm has to grasp a hammer to strike a nail, the applied pressures are so vastly different that the sensors have to be designed to handle both tasks, with equal ease, in the same arm!
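Here is a small sketch of how that grip-force difference might be handled in code. The sensor model and force limits are invented for illustration; a real gripper would use actual force sensors and additional safety logic.

# Close a gripper until the measured force reaches a task-specific limit.
def grip(read_force, max_force, step=0.1):
    applied = 0.0
    while read_force(applied) < max_force:
        applied += step                  # close a little more
    return applied

# Stand-in sensor: assume the measured force equals the applied force.
sensor = lambda applied: applied

egg_grip = grip(sensor, max_force=1.0)      # gentle grip for an egg (~1 N)
hammer_grip = grip(sensor, max_force=40.0)  # firm grip for a hammer (~40 N)
print(egg_grip, hammer_grip)

The same loop serves both tasks; only the force limit changes, which is why the sensing has to be accurate across such a wide range.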

Summary
Robotics is a fascinating area of study and research that aspires to mimic life and solve real-life problems. Combined with AI, such designs can solve complex problems in the areas described above, and in many more that are still evolving.

What does the future hold for robotics then?
Well, only time can tell. Bill Gates wrote about “a robot in every home” in a Scientific American article published not so long ago. That may not be such a far-fetched thought, after all!

Where do students study robotics?

There is no dearth of programs built for schools and students to study robotics and its components. Most such programs cover grades 3 to 12. They allow students to experiment with the basic components, build robots, program them, design them for simple applications, and take part in various robotics competitions.

More serious hobby clubs and exhibitions encourage and motivate students to solve real life problems using robotics.

Examples of companies that offer such programs in India are: 1) Lego Mindstorm, 2) Beebox, 3) Nex Robotics, 4) Think Labs

There are many others. For students who are keen to build on their own, there are open platforms and tools available. One of the best examples is ROS: http://www.ros.org/

ROS partners with leading universities, researchers, and platforms to create interesting learning curricula and to build complex real-life solutions in the field of robotics.

Some examples are here: http://www.ros.org/core-components/

TurtleBot and ROS are great examples of how this framework can be taken to schools and students.

What opportunities exist for students in this field?
Robotics as an area of study is fairly mature and is now an undergraduate-level subject in most western countries. Within India, there are colleges that offer mechatronics as a stream at the undergraduate level.

For a student who wants to make a career in this area, some of the best universities to study at are Carnegie Mellon, MIT, the University of Georgia, the Georgia Institute of Technology, and the University of Southern California. There are many more spread across Europe and Asia that offer excellent programs.

Closer home, the premier IITs and some of the higher-ranked engineering institutes offer mechatronics as an area of study. Most good-quality robotics research in India is offered at the master’s level by such institutes.

Advanced research and application-specific work is also being done in India currently, within the defence departments and in industries that are focusing on robotics applications. Many new startups have also sprung up, focusing on real-life industrial, medical, and service areas of robotics.

As was hinted at the beginning of this article, robotics combined with AI is already a reasonably mature field in which to study and build a career. It demands in-depth knowledge of physics and math to begin with; kinematics, mechanics, AI, imaging, and algorithms as one goes deeper technically; and sociological and psychological skills and knowledge as one aspires to design robots that are of social use to humans. It is a truly multi-disciplinary area of study, research, and application for an aspiring student.


The author is the Director and CTO of CREATNLRN, a venture focussing on creating an adaptive and interactive learning platform for high school students. He can be reached at uponnadi@gmail.com.
