For most people, AI (artificial intelligence) is a pipe dream made real in their favorite movies. From Star Trek’s Data and the Terminator’s T-800 to Ex Machina’s Ava, there has never been a lack of imagination when it comes to AI, whether as humanity’s ally or foe. For a computer scientist, however, defining AI is not so easily done. To be fair, AI does not necessarily think the way humans do, and the human façade most of them wear in the movies is simply a way of making them more ‘real’ and relatable rather than actually realistic.
For one thing, an AI can be a computer program designed for a specific mission with the sole task of finishing it perfectly. For example, AlphaGo was designed to beat the human world champion at Go, and Watson was designed to beat humans at Jeopardy, yet neither of these AIs acted or ‘thought out’ its moves the way humans do. Instead, pattern-recognition software helped each AI anticipate its opponent’s moves with an accuracy humans cannot match. Rather than saying AIs think and execute tasks like humans, it would be more accurate to say they think and act rationally.
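The idea of acting rationally rather than humanly can be made concrete. A game-playing program does not deliberate; it scores every legal move with an evaluation function and takes the best one. The tiny Python sketch below is purely illustrative (the moves and scoring rule are made up, not how AlphaGo or Watson actually work):

```python
# A "rational" agent in miniature: no human-style reasoning, just
# scoring every legal move and picking the one with the highest score.

def choose_move(legal_moves, evaluate):
    """Return the move with the highest evaluated score."""
    return max(legal_moves, key=evaluate)

# Hypothetical example: moves are (row, col) squares on a 3x3 board,
# and the evaluation function prefers squares closer to the center.
moves = [(0, 0), (1, 1), (2, 0)]
score = lambda m: -(abs(m[0] - 1) + abs(m[1] - 1))

print(choose_move(moves, score))  # the center square: (1, 1)
```

Real systems differ only in scale: the move list and the evaluation function are vastly larger and are learned rather than hand-written.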
These days, scientists define AI as getting computers to carry out tasks that humans do automatically and subconsciously. Despite the exponential growth of computing power, converting common sense into binary code is one of the most difficult snags scientists have come across. Teaching machines to live in a three-dimensional world when their entire existence is based on converting all sensory input into 0s and 1s is a task of no little importance. Newer AIs are programmed to learn through interaction, but their shortcomings in making spontaneous choices have not gone unnoticed. The upside is that there are AIs that handle certain tasks very well, and their currently limited range of use is expected to expand soon.
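The point about 0s and 1s is literal: every sensory reading a machine receives arrives as a string of bits. A one-line Python illustration (the temperature value is a made-up example):

```python
# Everything a computer "senses" must first become 0s and 1s.
# Here a hypothetical temperature reading is shown as the bits of
# its 8-bit integer representation.

temperature = 23  # made-up sensor reading, degrees Celsius
bits = format(temperature, '08b')
print(bits)  # 00010111
```

Bridging the gap between such bit patterns and human common sense is precisely the snag the paragraph above describes.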