The Five Senses

This article explores the world of sensory AI, which aims to replicate the five human senses (sight, sound, smell, taste, and touch) in machines, and highlights how the technology can make applications smarter and more efficient. In vision, for instance, AI-powered cameras can recognize objects and learn continuously without constant internet connectivity, which is especially useful in remote areas where traditional processing might not be feasible. Drones equipped with visual sensors can monitor agricultural conditions or perform safety inspections, showing how sensory AI can directly affect industries. For sound, smart microphones can detect and analyze noises to monitor equipment health and even stop machines before they fail. In healthcare, olfactory sensors are making strides by analyzing breath to help diagnose diseases such as cancer or Parkinson's. Taste is tackled through "electronic tongues," which help ensure food safety and quality without the subjectivity of human testers. Lastly, touch sensing is being integrated into machines for better interaction with their environments, for example by adjusting the driving behavior of autonomous vehicles based on road conditions. Overall, the article emphasizes that with advancements like the Akida processor, sensory AI is not just about mimicking human senses but about enhancing real-world applications across many fields.
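
As one concrete illustration of the sound example above, the sketch below flags unusual readings from an audio or vibration sensor using a rolling z-score. It is a minimal toy, assuming a simple stream of amplitude values; the window size, threshold, and signal are invented for illustration and are not details of any product mentioned in the article.

    # Toy sketch: flag unusual machine noise with a rolling z-score.
    # The sensor feed, window size, and threshold are illustrative
    # assumptions, not details from any specific product.
    from collections import deque
    from statistics import mean, stdev

    def monitor(readings, window=50, threshold=3.0):
        """Yield (index, value) for readings that deviate sharply from recent history."""
        history = deque(maxlen=window)
        for i, value in enumerate(readings):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) / sigma > threshold:
                    yield i, value  # candidate fault: stop or inspect the machine
            history.append(value)

    # Example: a steady hum with one spike that a health monitor should catch.
    signal = [1.0 + 0.05 * ((-1) ** i) for i in range(100)] + [9.0] + [1.0] * 20
    print(list(monitor(signal)))  # -> [(100, 9.0)]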

The beginnings of AI in the 1950s

In the 1950s, computing machines essentially functioned as large-scale calculators. In fact, when organizations like NASA needed specific calculations performed, such as the trajectory of a rocket launch, they often turned to human "computers": teams of women tasked with solving those complex equations. Long before computing machines became the modern devices they are today, a mathematician and computer scientist envisioned the possibility of artificial intelligence. This is where AI's origins really began.

Alan Turing

At a time when computing power still relied largely on human brains, the British mathematician Alan Turing imagined a machine capable of advancing far beyond its original programming. In Turing's view, a computing machine would initially be coded to follow its program but could expand beyond its original functions. Turing lacked the technology to prove his theory because computing machines had not yet advanced to that point, but he is credited with conceptualizing artificial intelligence before it came to be called that. He also devised a means of assessing whether a machine thinks on a par with a human, which he called "the imitation game" but which is now more popularly known as "the Turing test."

ELIZA

Created by the MIT computer scientist Joseph Weizenbaum in 1966, ELIZA is widely considered the first chatbot. It was intended to simulate a Rogerian psychotherapist, reflecting the statements users typed back at them as questions that prompted further conversation. Weizenbaum believed this rather rudimentary back-and-forth would prove the simplistic state of machine intelligence. Instead, many users came to believe they were talking to a human professional. In a research paper, Weizenbaum explained, "Some subjects have been very hard to convince that ELIZA…is not human."
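
To make ELIZA's trick concrete, here is a toy sketch of the reflection technique: match a statement against a pattern, swap pronouns, and hand the words back as a question. The patterns and pronoun table below are illustrative assumptions, not Weizenbaum's original script.

    # Toy sketch of ELIZA-style reflection: turn a user's statement back
    # into a question with simple pattern matching. The patterns here are
    # illustrative, not Weizenbaum's original script.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(statement):
        match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        match = re.match(r"i am (.*)", statement, re.IGNORECASE)
        if match:
            return f"How long have you been {reflect(match.group(1))}?"
        return "Please tell me more."

    print(respond("I am worried about my exams"))
    # -> How long have you been worried about your exams?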

IBM Watson

Many years after IBM’s Deep Blue program successfully beat the world chess champion, the company created another competitive computer system in 2011 that would go on to play the hit US quiz show Jeopardy!. In the lead-up to its debut, Watson, built on IBM’s DeepQA architecture, was fed data from encyclopedias and from across the internet. Watson was designed to take questions posed in natural language and respond accordingly, a capability it used to beat two of the show’s most formidable all-time champions, Ken Jennings and Brad Rutter.

Siri and Alexa

During a presentation about its iPhone product in 2011, Apple showcased a new feature: a virtual assistant named Siri. Three years later, Amazon released its own virtual assistant, Alexa. Both had natural language processing capabilities that could understand a spoken question and respond with an answer. Yet they still had limitations. Known as "command-and-control systems," Siri and Alexa are programmed to understand a lengthy list of questions but cannot answer anything that falls outside their purview.
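
A minimal sketch of the command-and-control idea, assuming a hypothetical assistant with a fixed table of commands: known utterances map to canned actions, and anything outside the table is rejected. The commands and replies below are invented for illustration.

    # Toy command-and-control assistant: utterances are matched against a
    # fixed list of known commands; anything else is refused.
    COMMANDS = {
        "what time is it": lambda: "It is 10:42.",          # canned reply for the sketch
        "set a timer": lambda: "Timer set for 10 minutes.",
        "play music": lambda: "Playing your playlist.",
    }

    def handle(utterance):
        action = COMMANDS.get(utterance.lower().strip())
        if action is None:
            return "Sorry, I can't help with that."  # outside the system's purview
        return action()

    print(handle("Set a timer"))        # recognized command
    print(handle("Write me a sonnet"))  # falls outside the fixed list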

First driverless car

Ernst Dickmanns, a scientist working in Germany, built the first self-driving car in 1986. Technically a Mercedes van outfitted with a computer system and sensors to read its environment, the vehicle could only drive on roads without other cars or passengers. While the car was a far cry from the autonomous vehicles many imagine when thinking about AI-driven cars, Dickmanns' van was an important step toward that (still unrealized) dream.

Deep Blue

In 1996, IBM had its computer system Deep Blue, a chess-playing program, compete against then-world chess champion Garry Kasparov in a six-game match. Deep Blue won only one of the six games, but the following year it won the rematch; in fact, it took only 19 moves to win the final game. Deep Blue didn't have the functionality of today's generative AI, but it could process information far faster than the human brain: in one second, it could evaluate 200 million potential chess positions.
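
The look-ahead behind chess programs of that era can be sketched as minimax search with alpha-beta pruning. The toy below searches a hand-made game tree rather than real chess positions; Deep Blue's actual evaluation function and specialized hardware were far more elaborate than this.

    # Toy sketch of game-tree search: minimax with alpha-beta pruning over
    # a hand-made tree. Numbers stand in for static evaluations of positions.
    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):  # leaf: a position's evaluation score
            return node
        best = float("-inf") if maximizing else float("inf")
        for child in node:
            score = alphabeta(child, not maximizing, alpha, beta)
            if maximizing:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if alpha >= beta:  # prune branches that cannot change the outcome
                break
        return best

    # Nested lists stand in for positions reachable by alternating players.
    tree = [[3, 5], [6, [9, 8]], [1, 2]]
    print(alphabeta(tree, maximizing=True))  # -> 6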

Artificial intelligence

Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. Some AI applications are based on rules; more often now, they are built using machine learning, which is said to 'learn' from examples in the form of data. For example, some AI applications are built to answer questions or help diagnose illnesses, while others could be built for harmful purposes, such as spreading fake news. AI applications do not think: they are built to carry out tasks in a way that appears to be intelligent. Generative AI is a type of artificial intelligence that can create new content, such as text, images, and music, based on patterns learned from existing data. It works by learning the underlying structures and relationships within data and then using that knowledge to generate novel outputs. Think of it as an AI that can "imagine" new possibilities based on what it has already seen.
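
To make "learn patterns, then generate" concrete, here is a toy sketch: a word-level Markov chain that records which word tends to follow which, then samples new text from those learned transitions. Modern generative AI relies on neural networks at vastly larger scale; this only illustrates the underlying idea.

    # Toy illustration of "learn patterns, then generate": a word-level
    # Markov chain. Real generative AI uses large neural networks; this
    # only demonstrates sampling from learned patterns to produce novel output.
    import random
    from collections import defaultdict

    def train(text):
        """Record which words follow which in the training text."""
        model = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
        return model

    def generate(model, start, length=8):
        """Walk the learned transitions to produce new text."""
        out = [start]
        for _ in range(length):
            options = model.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    model = train(corpus)
    print(generate(model, "the"))  # e.g. "the cat sat on the dog sat on the"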

What Is AI in Decision-Making

AI decision-making is the process of using artificial intelligence to make informed decisions by analysing large datasets, identifying patterns, and predicting outcomes. It integrates advanced technologies such as machine learning (ML), natural language processing (NLP), and deep learning, and it spans different types of systems, including generative AI and AI agents. Think of AI decision-making as a GPS navigation system for businesses: just as a GPS analyses traffic data, past travel patterns, and alternate routes to determine the best path, AI technology sifts through massive amounts of structured and unstructured data to inform strategic decisions. By continuously learning and adapting from new data, AI decision-making can improve over time, enabling systems to offer more accurate insights.
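
The GPS analogy can be sketched in a few lines: keep a running average of observed travel times per route, predict the fastest route from those averages, and refine the estimates as new data arrives. The route names and times below are invented for illustration; real decision systems use far richer models.

    # Minimal sketch of the GPS analogy: learn from observed travel times
    # and pick the route predicted to be fastest, improving as data arrives.
    class RoutePlanner:
        def __init__(self):
            self.estimates = {}  # route -> (average minutes, observation count)

        def observe(self, route, minutes):
            """Fold a new travel time into the running average for a route."""
            avg, n = self.estimates.get(route, (0.0, 0))
            self.estimates[route] = ((avg * n + minutes) / (n + 1), n + 1)

        def best_route(self):
            """Predict the fastest route from what has been learned so far."""
            return min(self.estimates, key=lambda r: self.estimates[r][0])

    planner = RoutePlanner()
    for route, minutes in [("highway", 30), ("highway", 45), ("back roads", 35)]:
        planner.observe(route, minutes)
    print(planner.best_route())  # "back roads": avg 35 beats highway's avg 37.5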
