Exploring AI: Concepts and Risks

by J.M. Pressley

A Machine Learning algorithm walks into a bar. The bartender asks, “What will you have?” The algorithm responds, “What’s everyone else having?”

The term artificial intelligence (AI) dates back to 1955, coined by John McCarthy for a summer workshop at Dartmouth. For the next fifty years, the average person held a fairly limited concept of AI as either science fiction or computers trying to beat humans in board games. Most people felt something change when Garry Kasparov lost a 1997 chess match to IBM's Deep Blue computer. Was it an anomaly or a harbinger? Largely, we couldn't tell.

Then came social media. And as anyone with an active account from the last decade can attest, we were suddenly swimming in AI algorithms. By 2020, AI alternately promised to make nearly everyone's work easier or to put nearly everyone out of work. It has a lot of money behind it trying to make a lot more money.

However—much like “cloud computing” a couple of decades earlier—“artificial intelligence” means different things to different people. Some view it as advanced data analytics. Some view it as a personal assistant. And some view it as an existential threat. It could be all that and more, all at once. But what is AI, exactly?

Types of Artificial Intelligence

AI has multiple definitions. Nearly all describe a range of sophistication in how AI functions and/or replicates reasoning. The broadest fundamental breakdown falls into three categories: Narrow, General, and Super. At present, we are technologically in the Narrow phase of AI development.

  • Narrow AI, as the name suggests, performs a specific task or set of tasks. A YouTube or Netflix feed algorithm is a good example: the AI evaluates your past viewing history and suggests similar content (a toy sketch of that logic follows this list). Predictive search falls into the same category—once you start typing a word, the AI suggests completions for the search phrase based on its record of previous searches.
  • General AI equates to human-like reasoning, using past experience to respond to new circumstances. In this case, machine learning determines new courses of action and even plans ahead for events for which the AI hasn’t been programmed. This is still in the hypothetical stage—and may be for some time to come.
  • Super AI, in theory, would exceed human intelligence and capability. It’s hard to describe what this would actually entail without getting into science fiction. The most salient feature would be autonomous self-improvement. Scientists aren’t even clear if this level of AI is achievable.
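
To make the Narrow AI example a bit more concrete, here is a toy Python sketch of the kind of content-based suggestion a streaming feed might make. Every title, genre weight, and score below is invented for illustration; real recommendation systems are vastly more sophisticated, but the underlying idea of "suggest what resembles past viewing" is the same.

```python
# Toy content-based recommender: suggest titles whose genre profile most
# resembles what the user has already watched. All titles and genre
# weights are invented for illustration.
import math

CATALOG = {
    "Space Saga":      {"sci-fi": 1.0, "action": 0.7, "drama": 0.2},
    "Courtroom Hours": {"drama": 1.0, "thriller": 0.5},
    "Robot Uprising":  {"sci-fi": 0.9, "action": 0.9},
    "Quiet Village":   {"drama": 0.8, "romance": 0.6},
}

def cosine(a, b):
    """Cosine similarity between two sparse genre vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(watched, top_n=2):
    """Average the genre vectors of watched titles, then rank the rest."""
    profile = {}
    for title in watched:
        for genre, weight in CATALOG[title].items():
            profile[genre] = profile.get(genre, 0.0) + weight / len(watched)
    candidates = [t for t in CATALOG if t not in watched]
    ranked = sorted(candidates, key=lambda t: cosine(profile, CATALOG[t]),
                    reverse=True)
    return ranked[:top_n]

print(recommend(["Space Saga"]))  # ['Robot Uprising', 'Courtroom Hours']
```

That is the whole trick: no understanding of the shows, just arithmetic over whatever looks similar to past behavior.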

There are several terms relating to AI that are also helpful to know:

  • Algorithm—a sequence of programmed instructions and rules used by AI to perform a function
  • Deep Learning—multilayered neural networks inspired by the human brain that use large datasets and complex algorithms for processing unstructured data
  • Emergent Behavior—an AI displaying unpredictable or unintended capabilities
  • Generative AI—an AI that is trained on large content datasets to mimic or create similar content
  • Hallucination—incorrect or fabricated information presented as factual in an AI's output
  • Large Language Model (LLM)—an AI trained on large volumes of text to understand and replicate language
  • Limited Memory—an AI system that stores real-time events in a database to use in improving predictions
  • Machine Learning (ML)—a field of AI study that focuses on ways in which computers can learn from their own interaction with data and improve their algorithms over time
  • Natural Language Processing (NLP)—AI that enables computers to interpret human language, either spoken or written
  • Prompt—an input from a human user to an AI system to return a result
  • Supervised Learning—a subset of machine learning in which the AI is trained on labeled, structured data provided by its programmers
  • Unsupervised Learning—a subset of machine learning in which the AI discovers patterns in unstructured or unlabeled data without explicit human intervention (see the sketch following this list)
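
The difference between those last two terms is easier to see in code than in prose. The short Python sketch below, which assumes scikit-learn is installed and uses invented data points, trains a supervised classifier on labeled examples and then lets an unsupervised clusterer group the same points without ever seeing the labels.

```python
# Minimal contrast between supervised and unsupervised learning.
# Data points are invented; scikit-learn is assumed to be installed.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy dataset: two features per point (e.g. hours watched, clicks per day).
X = [[1.0, 0.9], [1.2, 1.1], [0.8, 1.0],   # one group of points
     [5.0, 4.8], [5.2, 5.1], [4.9, 5.0]]   # another group of points
y = [0, 0, 0, 1, 1, 1]                     # labels, used only by the supervised model

# Supervised learning: fit on labeled data, then predict labels for new points.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.1, 1.0], [5.1, 4.9]]))   # expected: [0 1]

# Unsupervised learning: discover structure without ever seeing the labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # two clusters; which cluster gets which id is arbitrary
```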

Applications

Business drives a great deal of AI adoption. With all technologies, business prioritizes efficiency, productivity, and lowered operational costs. AI checks every box in that regard. It enables businesses to escalate their e-commerce, customer service, marketing, and supply chain capabilities. AI also provides organizational officers support in making the decisions that impact their businesses.

Medicine comprises another burgeoning arena for AI. Doctors can use AI to help with complicated diagnoses and determining effective treatments. Diagnosticians work with tools that analyze medical imaging and samples to detect early-stage development of conditions. Pharmaceutical companies utilize AI to create new drugs and tailor patient therapies.

Security and risk management are heavy users of AI. Financial systems use AI to help detect transactions and patterns associated with fraud. Law enforcement uses AI not only to investigate crimes but to attempt to prevent them in the first place. Facial and voice recognition aid in surveillance. Traffic light cameras can automatically issue citations and help track individual vehicles on the road.
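
As one illustration of how such fraud screening might work, the Python sketch below flags unusually large transactions with an isolation forest. This is only one of many possible techniques and not necessarily what any given bank uses; the amounts are invented, and a real system would weigh far more signals than the dollar value alone.

```python
# Toy anomaly-detection sketch for fraud screening using an isolation forest.
# Transaction amounts are invented; real systems combine many more signals
# (merchant, location, timing, device) over far larger histories.
from sklearn.ensemble import IsolationForest

# Mostly routine card transactions, plus a couple of unusually large ones.
amounts = [[23.40], [7.99], [54.10], [12.00], [31.25], [18.60],
           [9.99], [41.75], [26.30], [4999.00], [15.20], [3875.50]]

model = IsolationForest(contamination=0.2, random_state=0).fit(amounts)
flags = model.predict(amounts)  # +1 = looks normal, -1 = flagged as anomalous

for amount, flag in zip(amounts, flags):
    if flag == -1:
        print(f"review transaction of ${amount[0]:.2f}")
```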

The list of applications for AI in everyday life grows daily. AI has proven valuable and even crucial in certain industries. Companies now routinely market AI as a panacea for their customers. AI can indeed be a powerful tool, but it comes with limitations.

Limitations

AI, while great at pattern recognition, lacks the refinement (some might say common sense) to understand what a given pattern means. It won't interpret anything outside the parameters of what it has explicitly been told to do. Subtlety and nuance don't exist for today's AI, and it remains to be seen whether they ever will. Any genuinely new situation, any new set of circumstances, will be incomprehensible to it.

That leads to another pitfall of AI. Computer science has long appreciated the concept of “garbage in, garbage out.” AI is no different. An AI's output is only as good as the prompt given by the user and the data on which it has been trained. Missing, incomplete, or incorrect data will result, at best, in flawed output. AI is also susceptible to flaws in its algorithms. For instance, a job applicant tracking AI trained using only the resumes of graduates from one university might start to exclude graduates of all other schools.
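
Here is a hypothetical Python sketch of that pitfall, a slight variation on the example above: the hiring history used for training already favored one school, so the model learns the school itself as the deciding feature and rejects an otherwise stronger candidate. Every applicant, feature, and label is invented, and scikit-learn is assumed to be installed.

```python
# Hypothetical sketch of "garbage in, garbage out" in resume screening.
# The training history is biased: past hires overwhelmingly came from one
# school, so the model learns the school itself as the deciding signal.
# All data, features, and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features per applicant: [years_of_experience, graduated_from_state_university]
X_train = [
    [2, 1], [4, 1], [6, 1], [8, 1],   # State University grads... hired
    [2, 0], [4, 0], [6, 0], [8, 0],   # equally experienced others... rejected
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]    # historical (and biased) hiring decisions

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A highly experienced applicant from another school is rejected, while a
# less experienced State University applicant is accepted.
print(model.predict([[10, 0], [1, 1]]))   # expected: [0 1]
```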

AI also lacks true creativity. An AI won’t write a mystery novel; it will generate something like a mystery novel based on every author it’s been trained on. There’s no real creative process involved, no conscious intent to create, and unlike a human author, AI can’t conceptualize something that’s never been done before.

As powerful as it is, AI remains a useful tool in human hands rather than a replacement for human thinking. And AI development brings with it some natural concerns.

Concerns

AI has unique potential as a breeding ground for unintended consequences. Problematic areas include how data is collected and used, what effects AI has on society, and how AI is regulated.

  • Bias—Biases can arise in the training dataset, the algorithms, or the interpretation of an AI's results, and their effects flow downstream, perpetuating and even exacerbating the original bias. For instance, historical arrest data can lead an AI to reinforce heavy policing of certain neighborhoods while ignoring newer crime trends in others.
  • Environmental impacts—The computing power necessary for AI requires tremendous amounts of both electricity to run the servers and fresh water to cool them, putting AI in direct competition with human needs for those resources. Mining the rare earth minerals required for advanced electronics damages the environment, and disposal of that hardware generates new non-recyclable waste and pollution.
  • Employment impacts—Business automation historically leads to job losses. Recent studies suggest that AI will cost more jobs than it creates over the next 5-10 years. If AI outpaces our ability to absorb those job losses into our society, unemployment will soar.
  • Misconduct—AI already aids in generating disinformation and fraud. Social media provides ample opportunity for AI bot accounts to sow dissension, and AI can create misleading or outright fake content. With the growing ability to replicate voices, AI raises social engineering scams to a new level of threat.
  • Privacy—It’s not just that AI has been trained on personal and sensitive data in many cases. It’s also how that data is subsequently used by AI. Has the subject consented? When does AI become illegal electronic surveillance? And does AI unintentionally divulge any personal information in its operation?
  • Regulation—Proponents of AI have argued against regulation on the grounds that it remains a developing technology; regulation, in their view, would stifle innovation. However, the rapid evolution of the technology over the last decade has outrun any public policy. If future AI affects as many elements of society as promised, we are already ill-equipped to mitigate the side effects.

In conclusion, AI offers many benefits in data analysis, automation, and other useful functions. Computer scientists have made great strides in AI capabilities and implementations, and we've reached a crossroads. If AI remains in service of humans, well regulated and dedicated in purpose, it can provide vital assistance across a broad swath of industries. If, however, AI grows ever more unfettered by ethics or supervision, it may yet prove to be a historic lapse of human judgment.