Enterprise explains: Artificial Intelligence (AI)

A couple of weeks back, Sally Radwan, the CIT Ministry's advisor on AI, gave us the rundown on the government’s artificial intelligence (AI) strategy. AI is expected to contribute 7.7% of Egypt’s GDP by 2030 — meaning we had all better get to grips with what exactly it is. AI could be explaining itself one day soonish — but in the meantime, here’s our (mostly) human-made explainer to help you brush up.

So what is AI? Put simply, AI is getting computers to act like humans in terms of problem-solving and decision-making. With the use of large datasets and computer programs, AI machines are made to mimic the intelligence and actions of human beings.

AI is used for a number of everyday activities. This includes speech recognition (like Siri and Alexa); customer service (think how many social media company pages use AI bots to handle their incoming customer requests); computer vision (as with image searching on Google); recommendations (like how Netflix and Spotify give you customized recs based on your prior activity); and automated stock trading (AI-driven trading platforms that can process thousands of trades a day without human intervention). On a more advanced level, AI is also used to steer driverless cars.

There are two types of AI: strong and weak. Don’t be fooled: weak AI is not necessarily the less powerful of the two, but rather refers to AI systems that are programmed only to fulfill specific tasks, like Siri or Alexa. Also called narrow AI, the tech is a set of advanced methods to analyze data and use it to extract patterns to predict and optimize certain values in a more sophisticated way than traditional data analysis tools can, Radwan told us last week. Strong AI, meanwhile, refers to the concept of AI systems that have an intelligence equal to humans — which don’t yet exist. Strong AI would boast a self-aware consciousness with the ability to solve problems, learn, and plan for the future, IBM writes.

SOUND SMART- Egypt’s national AI strategy focuses on weak or narrow AI, Radwan said. A key focus of the government’s efforts in narrow AI is final human determination (FHD), which means a human is always in control. Check out how the strategy is being applied across different sectors in our talk with Radwan here.

Same, same, but different: AI vs machine learning. Machine learning (ML) is a subset of AI, in which machines learn from their past interactions and input data to create new things or make different decisions. For instance, the Google doc in which this article was written makes suggestions for what the next word or phrase could be while we type. Because we’ve been using Google Docs company-wide for years, its word suggestions have become more specific: If we write the first name of one of our team members, the document automatically suggests their last name. That is machine learning.
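To make the idea concrete, here is a deliberately tiny sketch of "learning from past input": a suggester that counts which word has followed each word in the text it has seen, then proposes the most frequent follower. (The class name and training sentences are invented for illustration — real systems like Google Docs use far more sophisticated language models.)

```python
from collections import Counter, defaultdict


class NextWordSuggester:
    """Toy next-word suggester: learns word-pair frequencies from text."""

    def __init__(self):
        # For each word, count how often each other word follows it.
        self.followers = defaultdict(Counter)

    def learn(self, text: str) -> None:
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.followers[current][nxt] += 1

    def suggest(self, word: str):
        counts = self.followers[word.lower()]
        if not counts:
            return None  # no past data to learn from
        return counts.most_common(1)[0][0]


suggester = NextWordSuggester()
suggester.learn("sally radwan outlined the strategy")
suggester.learn("sally radwan spoke to enterprise")
print(suggester.suggest("sally"))  # -> radwan
```

The more text the suggester "experiences," the more specific its suggestions get — which is the same basic dynamic behind the company-wide Google Docs example above, just at a vastly smaller scale.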

And deep learning is a subset of machine learning — its evolution, so to speak. It allows machines to deduce a course of action based on previous examples of similar situations. Unlike machine learning, deep learning systems are built like the human brain, using interconnected neural networks that influence each other. Deep learning machines don't need human intervention to tell them what to do with the massive amounts of data that are fed into them. They learn via previous experience. One practical application for deep learning is in translation services — machines can decipher what dialect is being spoken, for example.
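The "interconnected neural networks" mentioned above can be sketched in a few lines: each unit takes a weighted sum of its inputs and passes it through a nonlinearity, and layers of such units feed into one another. The weights below are hand-picked purely for illustration — in a real deep learning system they are learned automatically from massive amounts of data.

```python
import math


def sigmoid(x: float) -> float:
    # A common nonlinearity: squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))


def layer(inputs, weights, biases):
    # Each output unit mixes every input -- the "interconnected
    # neurons that influence each other" described above.
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]


# Two stacked layers: 2 inputs -> 3 hidden units -> 1 output.
hidden = layer(
    [0.5, -1.0],
    [[0.8, -0.2], [0.4, 0.9], [-0.5, 0.3]],
    [0.1, -0.3, 0.2],
)
output = layer(hidden, [[1.2, -0.7, 0.5]], [0.0])
print(output[0])  # a value between 0 and 1
```

Stacking many such layers — "deep" networks — is what lets these systems pick up subtle patterns, like which dialect is being spoken, without a human spelling out the rules.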

There are of course concerns when it comes to AI, some of which are easily debunked. Fear #1: The singularity. One of the main fears of strong AI is the machines-overtake-humanity apocalypse scenario, as seen in countless movies and series. While this is just the stuff of fantasy for now, some AI researchers say there’s a 50% chance that machines could outsmart humans in the next 45 years. If things ever reach that stage, there’s really not much we can do except sit back and hope the robots decide to be kind. But freaking out now “is like being worried about overpopulation on Mars before we even have gotten a person on Mars,” one AI expert tells NBC.

Fear #2: Privacy. AI-driven consumer products “are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of those in their proximity,” Privacy International writes. That’s a problem in itself, but it gets worse: AI could be used to de-anonymize this data, tracking specific human beings across devices and in both public and private spaces. There’s also the possibility that our data could be used against us, as AI algorithms inadvertently perpetuate existing discrimination and biases in society. That is why the development, use, and research of AI must be subject to the minimum requirement of respecting, promoting, and protecting international human rights standards, Privacy International writes.

Fear #3: Employment. We’ve all heard it before: Machines are coming for our jobs. However, Radwan told us last week that AI’s net contribution to the labor market is actually positive, as it tends to create more jobs than it makes obsolete. Still, the rise of AI will displace some workers even as it creates new, different kinds of jobs — which is why governments need to work on upskilling the population as a whole.

Fear #4: Ethical concerns. So, let’s say an AI machine evolves and becomes sentient, with all the emotions, desires, and existential angst we associate with human consciousness. Do we treat it as we would an average machine? Or has it become something more — something that demands certain rights? This debate has been raging since well before strong AI was even in the realm of possibility, with some insisting it should get the same ethical treatment as humans once it emerges — including protection from exploitation and suffering.