Why CIOs Are Cautious About AI

by Robert Encarnacao, on Mar 16, 2025 4:00:00 AM

Artificial intelligence is the hottest buzzword in tech, but not everyone is jumping on the bandwagon with reckless abandon. In fact, many CIOs, CTOs, developers, and security teams are pumping the brakes on AI-powered software solutions. (No, it’s not just paranoia or Skynet-induced nightmares. There are very good reasons for caution.)

In this post, I'll explore why companies are increasingly wary of AI tools, the security pitfalls of generative AI (especially when sensitive data gets “stuck” in what we call an AI tarpit), and how a deterministic AI approach can offer a safer, more reliable path forward for software migration. We’ll also share how Growth Acceleration Partners (GAP) uses deterministic AI and AST extensions to ensure precision, security, and confidentiality in code migration.

The New AI Anxiety: Why the C-Suite Is Pumping the Brakes

It’s no secret that AI holds enormous promise. In a recent survey, 84% of enterprise CIOs said AI will be as transformative to business as the internet was. (CIO AI Trends, Salesforce)

So why have only 11% fully implemented AI?

The answer lies in a host of technical and organizational challenges, chief among them security worries and shaky data foundations. In fact, 67% of CIOs admit they’re taking a more cautious, calculated approach to AI rollouts than with prior technologies, largely because of these security and data concerns. The message is clear: leaders see the potential of AI, but they’re not blind to its risks.

Key reasons corporate leaders tap the brakes on AI include:

  • Security Risks: Handing over sensitive data or mission-critical code to an AI can feel like giving the keys to your kingdom to a mysterious and potentially nefarious stranger. If AI systems mishandle data or get breached, the fallout could be severe. There’s real precedent for this fear: employees using public AI tools have inadvertently leaked confidential information. For example, Samsung engineers once accidentally leaked proprietary source code by using an AI tool for debugging and summarization. This incident underscores the risk that any data fed into a public generative AI might later be incorporated into its training set or exposed to other users.

  • Unpredictability of Generative AI: Today’s popular AI models (like large language models) are incredibly powerful — and incredibly unpredictable. They don’t follow strict rules; they generate answers probabilistically. That means you might ask the same question twice and get two different answers, some of which may be flat-out wrong or nonsensical. In terms of code migration, it might lead to inconsistent coding patterns. Developers have learned the hard way that even a minor misinterpretation can introduce bugs, making the migration process less reliable and adding extra debugging burdens.

  • Compliance and Legal Concerns: Even if an AI tool behaves itself technically, companies must consider regulatory and compliance factors. Many industries have strict rules about data privacy, governance, and software processes. Introducing AI can throw a wild card into compliance. What if the AI’s training data included copyrighted code? What if it produces output that violates licensing? What about data sovereignty if your data is processed on a cloud service in another country? These aren’t theoretical questions — they’re real concerns that can lead to hefty fines, lawsuits, or serious reputational damage.

Given these challenges, it’s easy to understand why cautious decision-makers are wary. With only 9% of organizations feeling prepared to manage generative AI risks and a mere 5% ready to handle an unpredictable “unknown” AI incident, the default move is to proceed carefully. Exciting as AI may be, nobody in the C-suite wants to be the one saying “We moved fast and broke things” when those “things” might include customer data or critical business logic.

 

Third-Party AI Systems: On-Prem or Cloud, What Could Go Wrong? (Plenty.)

Another layer of caution comes from the use of third-party AI systems. Whether you’re adopting a cloud-based AI service or installing an AI platform on-premises, you’re essentially inviting an external “intelligent” vendor into your ecosystem. And as every security professional knows, third-party software can be a double-edged sword.

Cloud-Based AI

Using an AI provider’s cloud service means your data or code will travel outside your organization to reside on someone else’s servers. This raises immediate data security and privacy questions: Who can access that data? How is it stored? Is it encrypted? Could it be intercepted or leaked? We’ve already seen how public cloud AI services have led to inadvertent exposures of proprietary information. Even with vendor assurances, the fact remains that your sensitive data could be part of a breach if the provider’s systems are compromised.

On-Premises AI

You might think on-premises AI systems are a safe bet since your data stays within your controlled environment. However, they bring their own challenges. Relying on third-party AI software internally can still introduce vulnerabilities. You must maintain rigorous update schedules, patch management, and configuration oversight. Plus, if the on-premises AI system logs or caches data internally, there’s still a risk of that data being inadvertently exposed, especially if internal security controls aren’t airtight.

In both scenarios, the core issue is visibility and control. With third-party AI, you often deal with a “black box” — you input data, magic happens, output comes out. If something goes awry inside that black box, debugging and auditing become very challenging. Security teams demand robust controls, audit trails, and compliance assurances before trusting these systems with sensitive data.

 

Beware the “AI Tarpit”: When Sensitive Data Gets Stuck

In our discussion of generative AI, the term “AI tarpit” deserves special attention — and not for the inefficiency-related reasons some might assume. Here, we’re focusing on a security-centric definition.

An AI tarpit in this context refers to a situation where confidential user input or sensitive data becomes inadvertently “stuck” in an AI system. This can happen when:

  • Data Retention and Caching: AI models, especially generative ones, might retain or cache inputs during processing. Without stringent data retention policies, this sensitive data could later be retrieved, even when it shouldn’t be.
  • Inadvertent Exposure: When prompted in just the right (or wrong) way, these models might inadvertently expose or “regurgitate” previously seen data, turning confidential input into a hidden vulnerability.
  • Lack of Explicit Policies: Unlike traditional software with clear data retention policies, many AI models operate as black boxes where logging, caching, or data reuse policies are not transparent or are inadequately enforced.

In essence, the AI tarpit here isn’t just about performance inefficiencies; it’s a stark reminder that without the right controls, sensitive information could be trapped within an AI system, only to resurface at an inopportune moment. This represents a serious security risk, one that companies must guard against when considering AI solutions for mission-critical tasks like software migration. Check out this Stanford University whitepaper on confidential data in AI.

 

Generative AI vs. Deterministic AI: The Safe Path Through the Minefield

All these concerns lead to an important distinction in the AI world: generative AI versus deterministic AI. Understanding the difference is crucial for making smart decisions about AI-powered solutions, especially in software development and migration.

  • Generative AI (Probabilistic AI): These models, including popular large language models, generate outputs based on probability. This can lead to creative results but also introduces variability. The same input might yield different outputs at different times, and along with that variability come risks like hallucinations or unexpected behaviors — and, as discussed, the potential to inadvertently “trap” sensitive data.

  • Deterministic AI: In contrast, deterministic AI is engineered so that given the same input, it will always produce the same output. It follows predefined, rule-based transformations. This means no randomness, no unexpected twists, and no risk of exposing hidden data through unpredictable caching or retention behaviors. For software migration, this consistency is crucial. Deterministic AI ensures that the migrated code is a precise transformation of the original, adhering strictly to rules that guarantee both functional integrity and security.

In the context of software migration, these differences are more than theoretical. When you migrate code (say, from an old programming language to a modern one), accuracy and completeness are paramount. A deterministic approach ensures that your legacy system’s behavior is faithfully replicated, without the side effects or hidden vulnerabilities that might creep in with a generative model.
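
To see the difference in miniature, here’s a toy, string-level migration rule in Python. It’s purely illustrative (the rule table and constructs are hypothetical, and real migration tools operate on syntax trees rather than raw text, as described in the next section), but it captures the key property: fixed rules applied in a fixed order produce the same output every time.

```python
import re

# Hypothetical rule table mapping legacy constructs to modern equivalents.
# Because the table and its ordering are fixed, the transformation is
# fully deterministic: the same input always yields the same output.
MIGRATION_RULES = [
    (re.compile(r"\bMsgBox\s+(.+)"), r"MessageBox.Show(\1)"),  # VB6-style -> C#-style
    (re.compile(r"\bInteger\b"), "int"),
]

def migrate_line(line: str) -> str:
    """Apply every rule in a fixed order; no randomness, no sampling."""
    for pattern, replacement in MIGRATION_RULES:
        line = pattern.sub(replacement, line)
    return line

legacy = 'MsgBox "Hello, world"'
# Running the transformation twice always gives an identical result.
assert migrate_line(legacy) == migrate_line(legacy)
print(migrate_line(legacy))  # MessageBox.Show("Hello, world")
```

Contrast this with a generative model, where the same prompt can yield differently structured output on every run.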

 

GAP’s Secret Sauce: Deterministic AI + AST for Safe, Predictable Code Migration

So how does Growth Acceleration Partners fit into this picture? At GAP, we understand the genuine concerns around AI-driven solutions, and we designed our approach long before today’s generative AI systems arrived. Our migration process is built on deterministic AI principles augmented with ASTs (Abstract Syntax Trees), rather than relying on unpredictable generative models.

AST 101

An AST is a structured representation of your source code — a tree that breaks the code down into its constituent parts, much like parsing the grammar of a sentence. This allows our deterministic AI to understand not just the text of your code, but its underlying structure and intent. By leveraging ASTs, our solution can perform precise, rule-based transformations that maintain the original logic and semantics of your software.
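
To make that concrete, here’s a minimal sketch using Python’s built-in ast module. This is not GAP’s actual tooling (which targets other source languages), and old_logger is a hypothetical legacy helper, but it shows the pattern: parse code into a tree, apply a rule-based transformation to specific nodes, and emit the rewritten code.

```python
import ast

class LegacyCallRewriter(ast.NodeTransformer):
    """Deterministic, rule-based rewrite: every call to the hypothetical
    legacy helper `old_logger` becomes a call to `logging.info`."""

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # rewrite any nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_logger":
            node.func = ast.Attribute(
                value=ast.Name(id="logging", ctx=ast.Load()),
                attr="info",
                ctx=ast.Load(),
            )
        return node

source = "old_logger('migration started')"
tree = ast.parse(source)                  # source text -> syntax tree
tree = LegacyCallRewriter().visit(tree)   # rule-based transformation
ast.fix_missing_locations(tree)           # needed if the tree is later compiled
print(ast.unparse(tree))                  # logging.info('migration started')
```

Because the rewrite touches only the nodes the rule names, everything else in the tree (and therefore in the code’s logic) passes through unchanged.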

The Benefits of Our Approach

  • Precision and Predictability: Every code change is executed according to a strict set of transformation rules. There’s no guesswork. If the rule says “convert pattern X to pattern Y,” that’s exactly what happens every time. This guarantees that your migrated code is a faithful representation of your original, without the surprises generative models might introduce.

  • Security & Confidentiality: Our deterministic AI solution operates within controlled environments. It doesn’t require sending your proprietary code off to a public AI service. By working on your premises or within GAP’s secure, dedicated framework, we minimize the risk of sensitive data getting "stuck" in an AI tarpit. There’s no inadvertent caching or retention of confidential input that could later become a vulnerability.

  • Compliance and Control: The rules-driven nature of our approach means we can easily enforce compliance requirements. Whether it’s ensuring code adheres to specific security policies or data privacy regulations, our deterministic system provides full traceability. Every transformation is auditable (see the sketch after this list), making it much easier to validate compliance compared to black-box generative AI.

  • Flexibility without Chaos: Our AST approach is essential to our migration products and has been developed over years of handling real-world legacy systems, including the most security-sensitive applications. It allows our tool to adapt to even the most idiosyncratic code constructs, ensuring a smooth migration without compromising on precision or security.
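
As a rough illustration of the auditability point above, a rules-driven pipeline can record every rule it fires. The sketch below (hypothetical rule IDs and helpers, building on the earlier toy rule table, and not GAP’s production format) emits one structured record per applied transformation, which is exactly the kind of trail an auditor can replay and verify.

```python
import json
import re
from dataclasses import asdict, dataclass

@dataclass
class AuditRecord:
    """One entry per fired rule: enough to review and replay the change."""
    rule_id: str
    before: str
    after: str

def apply_rules_with_audit(line, rules):
    """Apply rules in a fixed order, logging every change that occurs."""
    trail = []
    for rule_id, pattern, replacement in rules:
        new_line = pattern.sub(replacement, line)
        if new_line != line:
            trail.append(AuditRecord(rule_id, line, new_line))
        line = new_line
    return line, trail

# Hypothetical rule: migrate a VB-style type name to its C# equivalent.
rules = [("R-001", re.compile(r"\bInteger\b"), "int")]
migrated, trail = apply_rules_with_audit("Dim x As Integer", rules)
print(migrated)                                     # Dim x As int
print(json.dumps([asdict(r) for r in trail], indent=2))
```

A generative model offers no equivalent: there is no rule ID to point to when a reviewer asks why a given line changed.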

Simply put, GAP’s approach marries the efficiency of automation with the assurance of deterministic, rule-based code transformation. Our deterministic AI is purpose-built to modernize your legacy applications safely and reliably, ensuring that every migrated line of code behaves as expected — without the hidden risks of generative AI.

 

Embrace AI (The Right Kind) for Safe and Predictable Software Migration

AI in software development doesn’t have to be a leap of faith. With the right approach, companies can enjoy the benefits of AI without falling into pitfalls like the AI tarpit, where sensitive data or source code becomes an unforeseen vulnerability. The key is to choose solutions that prioritize predictability, security, and accuracy.

Growth Acceleration Partners is proud to offer a path forward that directly addresses these concerns. By leveraging deterministic AI combined with advanced AST-based transformations, we deliver software migrations that are not only efficient and precise but also secure and compliant. Our solution is designed to eliminate the unpredictable risks of generative AI — and to ensure that your confidential data never gets “stuck” in a system where it shouldn’t be.

 

Ready to modernize your legacy systems without risking your sensitive data? 

Reach out to Growth Acceleration Partners today to discover how our secure, deterministic AI-driven migration solutions can help you confidently evolve your software for the future. Don’t let fear of AI vulnerabilities hold you back. Embrace the right kind of AI: one that’s built to protect and deliver, every step of the way.

Contact Us

Topics: application modernization, application migration, AI, generative AI
