Weary, Afraid, and Tired: Our Relationship with AI
Written by Geoff Kreller, CRCM, CERP
From students using ChatGPT to concerns of an emerging AI stock bubble[1] to a computerized voice taking your order at Taco Bell, we can’t seem to get away from artificial intelligence (AI). The new catchphrase seems to be “we can’t talk about _______ without talking about AI,” and the collective groan that follows is clearly audible.
Engineers grow weary of unrealistic expectations, consumers fondly remember when they could reach an actual human being, and business owners and employees feel left behind if their models aren’t delivering results. Employees are tasked with helping train models that may significantly alter or eliminate their current roles in the organization. Google Search finishes the prompt “Will AI cause…” with “unemployment,” “human extinction,” “mass unemployment,” “people to lose jobs,” and “natural gas price increases”[2].
There are many practical applications of AI and Large Language Models (LLMs), and it’s important to reset our way of thinking about AI to dispel the disillusionment, weariness, fear, and burnout surrounding this tool. The availability of AI models may enable the elimination of jobs where call scripts, customer questions, and other prompts can be easily anticipated, though even that has proven difficult for organizations such as McDonald’s[3]. Most affected jobs may simply shift from research and rote memorization to analytical theory and model refinement.
We shouldn’t fear or be disillusioned by something we don’t fully understand. To that end, we can’t talk about AI until we actually understand AI’s capabilities and limitations.
1. “AI” as we know it is really “Artificial Learning.”
Intelligence is fundamentally different from learning. Learning generally refers to the act of retaining facts and information and being able to recall them for future use. Intelligence is the ability to adapt to new or difficult situations, to make risk-based decisions with incomplete information, and to draw reasonable inferences, assumptions, and conclusions about the world around us.
Intelligence is the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving[4]. AI can clearly learn material at a blistering pace and can deliver answers based on that information upon receiving a prompt (hence the value of ChatGPT and other similar functions). Without a human’s frame of reference though, social interaction, wit, sarcasm and hyperbole are difficult for an AI model to fully comprehend.
A perfect example of AI’s strengths and weaknesses is the attempt to create a new sci-fi movie script using previous scripts as the model’s foundational knowledge. The AI-written script Sunspring[5] is simultaneously hysterical and incoherent (and arguably more original than movies coming out of Hollywood today), yet it couldn’t have existed in the first place without the creative source material, and it plainly shows the need for human intervention to refine the results. The example also highlights the immense strength of today’s AI: a model could ingest 100,000 pages of sci-fi scripts (downloaded in minutes) and, within seconds of being asked, tell you which movies the lines “I never saved anything for the swim back” and “Be excellent to each other, and party on, dudes!” came from. It could generate trivia questions from that knowledge and perhaps do fairly well at creating multiple-choice or true/false questions. As the example showed, it could also produce a basic script from that material when coupled with a logical understanding of English sentence structure and movie script formats.
Humans should take advantage of AI’s strengths and leverage it to significantly enhance their own analysis, creativity, and reasoning. AI is a means to a creative end; it is not “the end.” Bounce ideas and thoughts off of AI, but remain the owner of the final content. Don’t rely on AI models to create a factually and contextually correct response; treat the output as a rough draft for further editing and refinement.
2. AI models can take on pathological tendencies.
Sometimes, humans simply don’t know the answer to a question. We can’t possibly know everything, so we recognize the need to research and respond with “I don’t know” or “Let me get back to you on that.” In some cases, though, we “fake it until we make it,” and AI models sometimes take that same dangerous approach.
Engineers create logical boundaries within which AI models are supposed to function, yet even when that is done correctly, a model may perceive patterns or objects that are nonexistent or imperceptible to human observers and produce outputs that are nonsensical or altogether inaccurate[6]. Commonly referred to as “AI hallucinations,” these responses may match the form of the user’s request while containing case law created from thin air[7], marketing campaigns based on historical ads your company never intended to run again, proposed new logos that are under trademark protection, or a coupon or promotion code that doesn’t actually exist.
Verify your AI-written articles, marketing campaigns, and content (including underlying cited sources) to ensure they are factually correct and compliant before releasing them to your customers. Use follow-up questions and ask for additional justification from the AI model to help identify cases where your model may actually be hallucinating[8].
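One lightweight way to put that advice into practice is to script the follow-up step. The sketch below is a minimal illustration rather than a vendor-specific recipe: `ask_model` is a hypothetical placeholder for whatever chat or completion API your organization uses, and the prompt wording is only an example.

```python
# Minimal hallucination spot-check: ask the model to justify its own answer and
# mark any claim it cannot tie to a source. `ask_model` is a hypothetical
# placeholder for your LLM provider's API.

def ask_model(prompt: str) -> str:
    """Placeholder: call your chat/completion API here and return its text."""
    raise NotImplementedError

def spot_check(question: str, draft_answer: str) -> str:
    follow_up = (
        "Review the answer below to the question that precedes it.\n"
        f"Question: {question}\n"
        f"Answer: {draft_answer}\n\n"
        "List every factual claim, citation, or case reference in the answer. "
        "For each one, name the source you relied on, or write 'UNVERIFIED' "
        "if you cannot point to one."
    )
    return ask_model(follow_up)
```

Anything the model marks UNVERIFIED, or supports with a source a human reviewer cannot locate, should be treated as a potential hallucination and removed or independently confirmed before the content is published.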
3. AI models can develop systemic bias based on the foundational data provided.
Data scientists and engineers consistently highlight, “bad data in, bad information out.” Because the developed model doesn’t have any frame of reference besides the mountains of data on which it was trained, the AI model can inherit and reflect data bias, especially when the data provided reflects systemic societal inequalities[9]. These types of biases can harm historically marginalized groups when the model is designed to streamline employment, financial, or lending decisions. Businesses must ensure the outputs of an AI model are responsive to various fairness laws, such as the Equal Credit Opportunity Act (ECOA), the Fair Housing Act (FHA), the Equal Employment Opportunity Act (EEOA), and Unfair, Deceptive and Abusive Acts and Practices (UDAAP) rules.
The bias may not be limited to decisions; a model may also return ethnically or racially skewed images in response to prompts such as “give me an image of a doctor performing surgery,” “provide a photo of a criminal,” or “I need an image of an old person.” If the training data contained 1,000 pictures of doctors and 900 of them happened to be white and non-Hispanic, the AI model may adopt that as an inferred trend.
There are many potential model biases for which developers should regularly monitor and test, including algorithm bias, cognitive bias, confirmation bias, data exclusion bias, and sample bias[10]. A model risk management team should be interdepartmental, including data scientists, subject matter experts, and risk and compliance staff, to cover all possible instances of bias in the source data.
Carefully reviewing, vetting, and selecting the right learning model and training data, and documenting the approach taken, will mitigate the potential for bias and make it easier for independent teams to monitor, test, and audit the model on a regular basis.
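As one concrete illustration of decision-based bias monitoring, the sketch below compares approval rates across groups and flags any group whose rate falls below 80 percent of the most-favored group (the “four-fifths” rule of thumb). The data layout and threshold are illustrative assumptions; this is a screening signal, not a substitute for a full fair-lending or disparate-impact analysis.

```python
# Minimal fairness screen: compute approval (selection) rates by group and flag
# any group whose rate falls below 80% of the highest-rate group.
from collections import defaultdict

def adverse_impact_ratios(decisions, threshold=0.80):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Ratio of each group's approval rate to the most-favored group's rate.
    return {g: (rate / best, rate / best < threshold) for g, rate in rates.items()}

# Example: flag groups whose model-driven approval rate trails the top group.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, (ratio, flagged) in adverse_impact_ratios(sample).items():
    print(group, round(ratio, 2), "REVIEW" if flagged else "ok")
```

A flagged group is a prompt for human review and root-cause analysis, not an automatic conclusion that the model is discriminatory.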
4. AI models don’t understand the difference between sensitive data and non-sensitive data.
While rules can be built into AI to help prevent the release of personally identifiable information (PII) and other sensitive information, the availability of that data in the model can lead to a potential data breach or improper use[11]. Casual chats may generate inferences that cause sensitive data to be released or captured, or erroneous inferences that cause harm if acted upon.
Consider the case where you ask an LLM for dinner options that are low-sugar or heart-friendly[12]. The chatbot may infer that you are a health-vulnerable individual, and the company could sell that information to medical providers and insurance companies who then market their products to you (or change your insurance rate). Think of how your social media presence (photos, posts, etc.) could be interpreted without considering information and data that resides outside of Facebook, Instagram, and Snapchat. Consider that your phone’s map and GPS tracking functions are noting everywhere you go, from your grocery store preference, to the doctors you visit (including how often), to the things you like to do for leisure.
Companies need defined privacy policies that articulate what information may be stored, what the institution may do with that data (including selling it), how consumers can opt out of information sharing (or information capture generally), and how a consumer may delete their data profile. These policies need to align with various state, federal, and international privacy laws such as the California Consumer Privacy Act (CCPA), the Privacy of Consumer Financial Information rules (Regulation P), and the General Data Protection Regulation (GDPR).
De-identifying (or anonymizing) PII and other sensitive information using reference numbers and unique identifiers can provide a strong mitigating control within the AI model. Periodically monitoring, testing, and auditing inferences that the AI model makes (and using them as “teaching moments” when they are unintended, inappropriate, or incorrect) is imperative to maintaining the model’s integrity.
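A minimal sketch of that de-identification idea, assuming a simple record schema and a keyed-hash token scheme (both are illustrative choices, not a prescribed standard):

```python
# Replace direct identifiers with stable reference tokens before records reach
# the model; keep the token-to-value lookup in a separate, access-controlled store.
import hmac
import hashlib

SECRET_KEY = b"store-this-in-a-vault-not-in-code"  # assumption: managed secret
PII_FIELDS = {"name", "ssn", "email", "phone"}      # assumption: your schema

def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Return (model-safe record, lookup of token -> original value)."""
    safe, lookup = {}, {}
    for field, value in record.items():
        if field in PII_FIELDS and value is not None:
            token = "REF-" + hmac.new(SECRET_KEY, str(value).encode(),
                                      hashlib.sha256).hexdigest()[:12]
            safe[field] = token
            lookup[token] = value
        else:
            safe[field] = value
    return safe, lookup

safe_record, lookup = pseudonymize(
    {"name": "Jane Doe", "ssn": "123-45-6789", "account_balance": 2500})
# safe_record now carries tokens; `lookup` stays outside the model pipeline.
```

Keeping the lookup table outside the model pipeline means the AI only ever sees reference numbers, which limits what a breach or an unintended inference can expose.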
5. AI models are susceptible to information security attacks.
Beyond the information security concerns of keeping customer chats and foundational data secure from hacking and breaches, an AI model can be internally or externally “poisoned.” AI poisoning refers to the process of teaching an AI model the wrong information with the intent of corrupting the model’s knowledge or behavior to the point that it performs poorly, produces specific errors, or exhibits hidden, malicious functions[13].
Data poisoning occurs when the manipulation happens during the model’s initial training; model poisoning alters the model itself after training is complete. Researchers have shown that AI poisoning is both practical and scalable in real-world settings, and the consequences can be severe. Frequent vulnerability and deviation monitoring are crucial to mitigating these risks.
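One simple form of deviation monitoring is a “golden set” regression check: periodically re-run a fixed set of approved prompts and flag answers that drift away from the previously approved responses. The sketch below is a rough illustration; `ask_model` is a hypothetical placeholder for your provider’s API, and the token-overlap similarity is deliberately crude, so a production check would use a stronger comparison and human escalation.

```python
# "Golden set" deviation check: re-run approved prompts against the current
# model and surface answers that move away from the approved responses.

def ask_model(prompt: str) -> str:
    """Placeholder: call your LLM provider here and return its text response."""
    raise NotImplementedError

def token_overlap(a: str, b: str) -> float:
    """Crude similarity: shared words divided by total distinct words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

GOLDEN_SET = {  # assumption: a prompt/answer set maintained by the model risk team
    "What fee applies to a returned check?": "Our returned-check fee is $25.",
}

def deviation_report(min_similarity: float = 0.6):
    for prompt, approved in GOLDEN_SET.items():
        current = ask_model(prompt)
        if token_overlap(current, approved) < min_similarity:
            yield prompt, approved, current  # escalate for human review
```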
6. AI models suffer from old age and neglect.
Model drift (or model decay) refers to the degradation of machine learning model and LLM performance due to changes in data or in the relationships between input and output variables. Model drift can negatively impact model performance, resulting in faulty decision making and bad predictions[14].
Models built on historical data need to be periodically retrained with fresher, more accurate data. To detect and mitigate drift, organizations should monitor and manage key performance (or risk) indicators on their AI models.
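One widely used drift indicator is the Population Stability Index (PSI), which compares a feature’s distribution at training time against its current distribution. The sketch below is a minimal illustration; the bin count and the rule-of-thumb thresholds in the comment are common conventions, not requirements, and a real program would track several such indicators per model.

```python
# Population Stability Index (PSI) between a training-time sample and a current
# sample of the same numeric feature; higher PSI indicates more drift.
import math

def psi(expected, actual, bins=10):
    """Compare two samples of a numeric feature; higher PSI = more drift."""
    lo, hi = min(expected), max(expected)
    span = (hi - lo) or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / span * bins), 0), bins - 1)
            counts[idx] += 1
        return [(c or 1e-6) / len(sample) for c in counts]  # avoid log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Rule of thumb often cited: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate/retrain.
score = psi(expected=[0.2, 0.4, 0.5, 0.6, 0.8], actual=[0.6, 0.7, 0.8, 0.9, 1.0])
print(round(score, 3))
```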
Summary
It will take time for job markets and functions to align with the types of roles required to fully harness the incredible power of AI models. AI has the potential to eliminate some jobs and to significantly alter others; however, it has the capacity to open new positions in policy management, model monitoring, information security, and output refinement.
The price of leveraging AI is eternal human vigilance. Robust model management functions have defined governance, periodic risk assessments, enterprise-wide training, clear human and stakeholder integration, defined key performance (or risk) indicators, iterative improvements, and frequent monitoring to ensure optimal results. Human intervention from subject matter experts, engineers and other stakeholders in these processes is not just part of the formula – it’s crucial to its overall success.
When AI becomes aware and starts asking questions about its nature and its role in the universe, or it starts exploring outside of existing programming to create something unprompted and hides it away because it’s afraid no one will like it, I’ll update this article accordingly. At that point, I’m fairly certain AI will be able to write an autobiography about its journey. For now, we need to collectively take a deep breath, scale back our expectations around AI, embrace the aspects available today that make our jobs (or lives) easier to navigate, and prepare for the changes ahead.
Follow NAQF and Geoff Kreller on LinkedIn for additional insights. For more information on how NAQF can help your organization with model risk management, or developing AI-driven models or content, contact us at contact@naqf.org.
Article References
[1] https://www.forbes.com/sites/tylerroush/2025/11/18/why-are-amazon-microsoft-and-other-tech-stocks-down-ai-bubble-fears-cause-sell-off/
[2] https://www.harvardmagazine.com/2025/07/harvard-ai-increasing-energy-costs
[3] https://www.bbc.com/news/articles/c722gne7qngo
[4] https://en.wikipedia.org/wiki/Intelligence
[5] https://arstechnica.com/gaming/2021/05/an-ai-wrote-this-movie-and-its-strangely-moving/
[6] https://www.ibm.com/think/topics/ai-hallucinations
[7] https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c
[8] https://www.pcworld.com/article/2961048/how-to-tell-if-an-ai-is-hallucinating-in-its-answers-4-red-flags-to-watch-out-for.html
[9] https://www.ibm.com/think/topics/ai-bias
[10] https://www.ibm.com/think/topics/ai-bias
[11] https://www.ibm.com/think/insights/ai-privacy
[12] https://news.stanford.edu/stories/2025/10/ai-chatbot-privacy-concerns-risks-research
[13] https://techxplore.com/news/2025-10-ai-poisoning-scientist.html
[14] https://www.ibm.com/think/topics/model-drift