AI Adoption Is Racing Ahead but Security Is Still Struggling to Keep Up


Artificial intelligence is no longer an emerging technology; it’s a business imperative. Companies are embedding AI into customer service, product design, logistics and R&D, often at a breathtaking pace. But while companies are eagerly adopting AI, the safeguards to protect these robust systems are often left behind.

A new AI Security Benchmark Report from SandboxAQ, a company delivering solutions at the intersection of AI and quantum techniques, reveals a growing divide between AI adoption and AI-specific security readiness. Marc Manzano, general manager of the Cybersecurity Group at SandboxAQ, calls this the “confidence paradox,” where most leaders feel secure, yet few have tested those defenses.

“The biggest red flag was what we call the confidence paradox. Most security leaders feel confident about their AI defenses, but the data shows otherwise. Seventy-seven percent of leaders feel secure, but 72 percent told us that they haven’t run a single comprehensive AI risk assessment. We call it a pattern of ‘unverified confidence,'” Manzano told Newsweek.

Racing Ahead Without a Map

One of the main drivers of this gap is speed. Businesses are under immense pressure to adopt AI, whether to keep up with competitors, cut costs or unlock new capabilities. But this rapid rollout is often happening without corresponding investments in protection.

AI Adoption Soars While Security Falls Behind
Marc Manzano, general manager of the Cybersecurity Group at SandboxAQ, explained to Newsweek how “most security leaders feel confident about their AI defenses, but the data shows otherwise.”

Newsweek Illustration/Canva/Getty

“AI adoption is accelerating rapidly, driven by business pressure, not security readiness,” Manzano said. “Traditional security tools weren’t built for autonomous systems that make decisions and communicate on their own.…Many are just extending old playbooks to new systems, which doesn’t work.”

That “old playbook” problem is widespread. The report found that only 6 percent of companies have AI-specific security in place. The rest are relying on IT or security teams whose expertise was built for human-driven workflows, not machine-speed decision-making.

“Right now, only 10 percent of companies have dedicated AI security teams. In most cases, responsibility falls to existing security or IT teams, which may not have the right tools or expertise,” Manzano explained. “AI security is not just a technical challenge; it is an organizational one.”

John Heasman, chief information security officer at Proof, an identity verification network, told Newsweek that while many companies extend third-party risk management (TPRM) to AI vendors, “companies should already have a robust third-party risk management process to assess cybersecurity measures vendors have in place to protect their data. This can be extended to place additional emphasis on how the vendor performs data governance and security around AI, [for example], ‘Is our data used for model training?’ and ‘What measures are in place to protect the integrity of the AI models and data?'”

The New Attack Surface

One of the most urgent issues in AI security is the rise of nonhuman identities, the API keys, tokens and certificates that allow machines to access systems and data.

“In the past, hotels only handed out keys to people—staff and guests—who needed access to specific rooms,” Manzano said. “But today, it’s not just people checking in. There are cleaning robots, food delivery drones, automated systems, and AI agents—all of them need keys to do their jobs.…Unlike people, they don’t sign in at the front desk, and they never check out.”

The problem is that many organizations lack a complete inventory of these “keys” and are unaware of who—or what—is using them. That lack of visibility makes it difficult to enforce proper access controls or detect when credentials have been stolen or copied. And as these identities multiply, they become one of the fastest-growing, least-monitored attack surfaces.

Heasman noted that one way to reduce risk is by controlling AI system integration points. “One thing within the control of IT and security teams is how the integration of AI systems occurs, [for example] what sources of data an AI system may have access to, how it accesses that data and how end users interact with the system. Teams can greatly lower risk by adhering to tried and tested principles such as least privilege and strong logging, and bringing in companies to perform security testing, penetration testing, to find weaknesses.”

Manzano warns that attackers can exploit these machine identities with unprecedented speed.

“AI agents are designed to navigate [networks]. They can test thousands of credentials, connections and permissions in minutes to find one way in that a human might never spot.…By giving these AI agents long-lived credentials, companies are giving a super-intelligent bloodhound a master key and telling it to find every unlocked door in our entire estate.”

Inside a Company Trying to Get It Right

Albert Invent, a company building AI tools for the chemical and materials science industry, is already confronting these challenges head-on. Its leadership understands the dual risks of AI: internal use of AI tools that could expose intellectual property and customer-facing AI products that must be safeguarded against misuse.

“We have two categories: For internal tools like ChatGPT and code assistants, we use enterprise-grade providers with clear data policies and follow a ‘least exposure principle’—only sharing minimal data needed. For AI tools we build for customers, we’ve implemented strict access control layers, prompt guardrails and citation systems to prevent hallucinations while protecting chemical IP,” Nick Talken, CEO of Albert Invent, told Newsweek.

Talken also recognizes that AI-specific risks require AI-specific defenses.

“We’re transitioning from general security to AI-specific audits as our platform scales,” he explained. “We’ve implemented multiple layers—strict access controls, prompt injection detection, hardened knowledge systems—and we’re continuously adding controls like use-case/abuse-case reviews, anomaly detection and comprehensive input/output monitoring.”

Rather than assuming its defenses are airtight, the company takes a more cautious stance.

“AI security is evolving so rapidly that we stay humble about what we don’t know yet,” Talken said. “Rather than being confident, I’d say we’re committed to staying vigilant and adapting as new risks emerge.”

Why Traditional Tools Fall Short

AI systems differ fundamentally from traditional IT infrastructure. They make autonomous decisions, interact dynamically with other systems and generate outputs that can be manipulated in novel ways. This means security incidents can unfold much faster, and with less visibility, than in human-driven environments.

“AI introduces risks that move faster and are harder to see,” Manzano said. “Traditional tools rely on human behavior patterns, predictable workflows and periodic reviews.…The tools and controls built for human users via endpoint devices like laptops and smartphones don’t apply well here.”

A lack of industry standards further complicates the mismatch between old defenses and new threats. Without clear guidance, many companies are unsure how to begin securing AI, let alone maintain that security over time.

A Machine-Speed Threat Landscape

The report’s conclusion is blunt: The nature of cyberattacks is changing, and companies that cling to reactive, compliance-driven approaches will be outpaced.

“We are now in an era of machine-speed threats, and a reactive security posture focused on compliance and post-breach response is a losing strategy,” Manzano said. “Companies can’t wait to see what will happen here. This is a now problem.”

Proactive measures start with visibility, knowing exactly where AI is deployed, what data it touches and what systems it connects to. From there, organizations can close gaps, enforce tighter access controls and continuously test for vulnerabilities.

“Capabilities around visibility into AI tools varies by vendor, so right now, companies need to take what they can get, and advocate for greater visibility where it’s lacking,” Heasman said. “Like any system, it makes sense to centralize logs—[for example], in a SIEM, or data analytics platform where the security and IT team can query them—and determine a baseline for normal behavior then set up alerts when there is significant deviation from this.”

But as Manzano points out, not every company has the resources to do this alone. In those cases, partnering with outside experts or deploying specialized tools may be the only way to keep up.

“Many organizations don’t have the resources to manage this in-house. In those cases, companies will need to deploy solutions that can both uncover where a company is at risk and take steps to close those security gaps. Proactive security measures are the only way to survive in this era of machine-speed attacks,” he said.

Ultimately, the goal of the research is to shift leaders’ mindsets from misplaced confidence to genuine capability.

“Too many leaders assume their existing tools are good enough,” Manzano said. “This report shows that assumption is wrong, and business leaders need to make changes before it is too late.”



Source link

  • Related Posts

    Former NFL Pro Bowler Shares Bold Shedeur Sanders Browns Prediction

    At long last, the Cleveland Browns have named their starting quarterback. The team is rolling with longtime veteran Joe Flacco to lead the team when Week 1 of the regular…

    Former NL Cy Young Winner Predicted To Be Traded In Coming Months

    By Zach Pressnell is a Newsweek contributor based in Columbus, Ohio. His focus is MLB content. He has an extensive knowledge of professional baseball and all things that come with…

    Leave a Reply

    Your email address will not be published. Required fields are marked *

    You Missed

    Former NFL Pro Bowler Shares Bold Shedeur Sanders Browns Prediction

    • By John
    • August 18, 2025
    • 0 views
    Former NFL Pro Bowler Shares Bold Shedeur Sanders Browns Prediction

    Former NL Cy Young Winner Predicted To Be Traded In Coming Months

    • By John
    • August 18, 2025
    • 2 views
    Former NL Cy Young Winner Predicted To Be Traded In Coming Months

    Trey Hendrickson Next Team Odds: Bengals Fielding Offers For Pass Rusher

    • By John
    • August 18, 2025
    • 2 views
    Trey Hendrickson Next Team Odds: Bengals Fielding Offers For Pass Rusher

    How Pope Leo’s Popularity Compares to Pope Francis After 100 Days

    • By John
    • August 18, 2025
    • 2 views
    How Pope Leo’s Popularity Compares to Pope Francis After 100 Days

    Ketel Marte Blockbuster? Tigers Suggested As Suitor For All-Star

    • By John
    • August 18, 2025
    • 2 views
    Ketel Marte Blockbuster? Tigers Suggested As Suitor For All-Star

    Upcoming Cubs-Brewers Series Could Decide NL Central Winner

    • By John
    • August 18, 2025
    • 4 views
    Upcoming Cubs-Brewers Series Could Decide NL Central Winner