<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Protect AI | Blog</title>
    <link>https://protectai.com/blog</link>
    <description>Industry blogs, tips and tricks, and thought leadership pieces published by the Protect AI team.</description>
    <language>en</language>
    <pubDate>Fri, 15 Aug 2025 16:54:09 GMT</pubDate>
    <dc:date>2025-08-15T16:54:09Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>Automated Red Teaming Scans of Dataiku Agents Using Protect AI Recon</title>
      <link>https://protectai.com/blog/automated-red-teaming-scans-dataiku-protect-ai-recon</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/automated-red-teaming-scans-dataiku-protect-ai-recon" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-080825-Integration-Recon-Dataiku-site.webp" alt="Automated Red Teaming Scans of Dataiku Agents Using Protect AI Recon" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;We are thrilled to announce the integration of Protect AI’s Recon with &lt;a href="https://doc.dataiku.com/dss/latest/agents/introduction.html"&gt;Dataiku Agents&lt;/a&gt;, a groundbreaking step in securing enterprise LLM application deployments. With this integration, enterprises can harness Recon’s advanced red teaming capabilities to proactively identify vulnerabilities, enhance LLM application integrity and ensure compliance with the latest AI governance standards.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/automated-red-teaming-scans-dataiku-protect-ai-recon" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-080825-Integration-Recon-Dataiku-site.webp" alt="Automated Red Teaming Scans of Dataiku Agents Using Protect AI Recon" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;We are thrilled to announce the integration of Protect AI’s Recon with &lt;a href="https://doc.dataiku.com/dss/latest/agents/introduction.html"&gt;Dataiku Agents&lt;/a&gt;, a groundbreaking step in securing enterprise LLM application deployments. With this integration, enterprises can harness Recon’s advanced red teaming capabilities to proactively identify vulnerabilities, enhance LLM application integrity and ensure compliance with the latest AI governance standards.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fautomated-red-teaming-scans-dataiku-protect-ai-recon&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Red Teaming</category>
      <category>GenAI</category>
      <pubDate>Fri, 15 Aug 2025 16:54:09 GMT</pubDate>
      <guid>https://protectai.com/blog/automated-red-teaming-scans-dataiku-protect-ai-recon</guid>
      <dc:date>2025-08-15T16:54:09Z</dc:date>
      <dc:creator>Ned Martorell</dc:creator>
    </item>
    <item>
      <title>Strengthening AI Security with Protect AI Recon &amp; Dataiku Guard Services</title>
      <link>https://protectai.com/blog/strengthening-ai-security-protect-ai-dataiku</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/strengthening-ai-security-protect-ai-dataiku" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-080825-Integration-Recon-Dataiku-site.webp" alt="Strengthening AI Security with Protect AI Recon &amp;amp; Dataiku Guard Services" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As organizations rapidly adopt &lt;a href="https://www.dataiku.com/stories/detail/generative-ai/"&gt;generative AI&lt;/a&gt;, they face a new frontier of security challenges that traditional testing approaches simply cannot address. AI systems are non-deterministic, have unique attack surfaces, and&amp;nbsp;require specialized security testing methodologies.&amp;nbsp;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/strengthening-ai-security-protect-ai-dataiku" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-080825-Integration-Recon-Dataiku-site.webp" alt="Strengthening AI Security with Protect AI Recon &amp;amp; Dataiku Guard Services" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As organizations rapidly adopt &lt;a href="https://www.dataiku.com/stories/detail/generative-ai/"&gt;generative AI&lt;/a&gt;, they face a new frontier of security challenges that traditional testing approaches simply cannot address. AI systems are non-deterministic, have unique attack surfaces, and&amp;nbsp;require specialized security testing methodologies.&amp;nbsp;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fstrengthening-ai-security-protect-ai-dataiku&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Red Teaming</category>
      <category>GenAI</category>
      <pubDate>Fri, 08 Aug 2025 17:29:28 GMT</pubDate>
      <guid>https://protectai.com/blog/strengthening-ai-security-protect-ai-dataiku</guid>
      <dc:date>2025-08-08T17:29:28Z</dc:date>
      <dc:creator>Vedant Ari Jain</dc:creator>
    </item>
    <item>
      <title>Llama 4 Series Vulnerability Assessment: Scout vs. Maverick</title>
      <link>https://protectai.com/blog/vulnerability-assessment-llama-4</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/vulnerability-assessment-llama-4" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-070225-Vulnerability%20Assessment-Llama%20Guard%204-site.webp" alt="Llama 4 Series Vulnerability Assessment: Scout vs. Maverick" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Model Brief&lt;/h2&gt; 
&lt;p&gt;Meta has launched the Llama 4 family, featuring models built on a mixture-of-experts (MoE) architecture:&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/vulnerability-assessment-llama-4" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-070225-Vulnerability%20Assessment-Llama%20Guard%204-site.webp" alt="Llama 4 Series Vulnerability Assessment: Scout vs. Maverick" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Model Brief&lt;/h2&gt; 
&lt;p&gt;Meta has launched the Llama 4 family, featuring models built on a mixture-of-experts (MoE) architecture:&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fvulnerability-assessment-llama-4&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Red Teaming</category>
      <category>Threat Intelligence</category>
      <pubDate>Wed, 16 Jul 2025 16:54:17 GMT</pubDate>
      <guid>https://protectai.com/blog/vulnerability-assessment-llama-4</guid>
      <dc:date>2025-07-16T16:54:17Z</dc:date>
      <dc:creator>Mukunth Madavan</dc:creator>
    </item>
    <item>
      <title>AI Risk Report: Fast-Growing Threats in AI Runtime</title>
      <link>https://protectai.com/blog/ai-risk-report-fast-growing-threats-in-ai-runtime</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/ai-risk-report-fast-growing-threats-in-ai-runtime" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Webinar-AI%20Risk%20Report-061125-social%20%281%29.png" alt="AI Risk Report: Fast-Growing Threats in AI Runtime" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;div class="hs-embed-wrapper" style="position: relative; overflow: hidden; width: 100%; height: auto; padding: 0px; max-width: 850px; min-width: 256px; display: block; margin: auto;"&gt; 
 &lt;div class="hs-embed-content-wrapper"&gt; 
  &lt;div style="position: relative; overflow: hidden; max-width: 100%; padding-bottom: 56.25%; margin: 0px;"&gt;  
  &lt;/div&gt; 
 &lt;/div&gt; 
&lt;/div&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/ai-risk-report-fast-growing-threats-in-ai-runtime" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Webinar-AI%20Risk%20Report-061125-social%20%281%29.png" alt="AI Risk Report: Fast-Growing Threats in AI Runtime" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;div class="hs-embed-wrapper" style="position: relative; overflow: hidden; width: 100%; height: auto; padding: 0px; max-width: 850px; min-width: 256px; display: block; margin: auto;"&gt; 
 &lt;div class="hs-embed-content-wrapper"&gt; 
  &lt;div style="position: relative; overflow: hidden; max-width: 100%; padding-bottom: 56.25%; margin: 0px;"&gt; 
   &lt;iframe width="256" height="144.64" src="https://www.youtube.com/embed/De9ZQKB6br0?feature=oembed" frameborder="0" allowfullscreen style="position: absolute; top: 0px; left: 0px; width: 100%; height: 100%; border: none;"&gt;&lt;/iframe&gt; 
  &lt;/div&gt; 
 &lt;/div&gt; 
&lt;/div&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fai-risk-report-fast-growing-threats-in-ai-runtime&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Adversarial ML</category>
      <category>Industry News</category>
      <pubDate>Mon, 23 Jun 2025 20:11:49 GMT</pubDate>
      <guid>https://protectai.com/blog/ai-risk-report-fast-growing-threats-in-ai-runtime</guid>
      <dc:date>2025-06-23T20:11:49Z</dc:date>
      <dc:creator>Diana Kelley</dc:creator>
    </item>
    <item>
      <title>The Cost of Being Wordy: Detecting Resource-Draining Prompts</title>
      <link>https://protectai.com/blog/detecting-resource-draining-prompts</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/detecting-resource-draining-prompts" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-Inside%20the%20Scan-061625-social-1.png" alt="The Cost of Being Wordy: Detecting Resource-Draining Prompts" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The breakthrough of &lt;a href="https://protectai.com/blog/dark-reading-iceberg?__hstc=45788219.ae195167e34fb64359667024f802cc46.1750186951372.1750186951372.1750186951372.1&amp;amp;__hssc=45788219.1.1750186951372&amp;amp;__hsfp=3297267792"&gt;&lt;span&gt;large language models&lt;/span&gt;&lt;/a&gt; (LLMs) has captivated the natural language processing (NLP) world, with their influence extending far beyond the research communities in which they originated. [1, 2] Industries like business, marketing, and content creation have embraced LLMs for editing, writing, and creative tasks. As a result, companies such as OpenAI and Google have deployed interfaces for these powerful models, making them widely accessible.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/detecting-resource-draining-prompts" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-Inside%20the%20Scan-061625-social-1.png" alt="The Cost of Being Wordy: Detecting Resource-Draining Prompts" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The breakthrough of &lt;a href="https://protectai.com/blog/dark-reading-iceberg?__hstc=45788219.ae195167e34fb64359667024f802cc46.1750186951372.1750186951372.1750186951372.1&amp;amp;__hssc=45788219.1.1750186951372&amp;amp;__hsfp=3297267792"&gt;&lt;span&gt;large language models&lt;/span&gt;&lt;/a&gt; (LLMs) has captivated the natural language processing (NLP) world, with their influence extending far beyond the research communities in which they originated. [1, 2] Industries like business, marketing, and content creation have embraced LLMs for editing, writing, and creative tasks. As a result, companies such as OpenAI and Google have deployed interfaces for these powerful models, making them widely accessible.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fdetecting-resource-draining-prompts&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>GenAI</category>
      <pubDate>Tue, 17 Jun 2025 19:03:34 GMT</pubDate>
      <guid>https://protectai.com/blog/detecting-resource-draining-prompts</guid>
      <dc:date>2025-06-17T19:03:34Z</dc:date>
      <dc:creator>Duygu Altinok</dc:creator>
    </item>
    <item>
      <title>Security Spotlight: AppSec to AI, a Security Engineer's Journey</title>
      <link>https://protectai.com/blog/security-spotlight-appsec-to-ai</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/security-spotlight-appsec-to-ai" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-Security%20Spotlight-Tyler%20Krause%20Ferris-site.webp" alt="Security Spotlight: AppSec to AI, a Security Engineer's Journey" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As an application security engineer with over a decade in the trenches of web applications, APIs, and enterprise systems, I never expected my career path would lead me to the frontier of artificial intelligence security. Yet here I am, finding myself both fascinated and challenged by the unique security considerations of AI systems.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/security-spotlight-appsec-to-ai" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-Security%20Spotlight-Tyler%20Krause%20Ferris-site.webp" alt="Security Spotlight: AppSec to AI, a Security Engineer's Journey" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As an application security engineer with over a decade in the trenches of web applications, APIs, and enterprise systems, I never expected my career path would lead me to the frontier of artificial intelligence security. Yet here I am, finding myself both fascinated and challenged by the unique security considerations of AI systems.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fsecurity-spotlight-appsec-to-ai&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Cybersecurity</category>
      <category>Supply Chain Vulnerability</category>
      <pubDate>Thu, 12 Jun 2025 17:47:46 GMT</pubDate>
      <guid>https://protectai.com/blog/security-spotlight-appsec-to-ai</guid>
      <dc:date>2025-06-12T17:47:46Z</dc:date>
      <dc:creator>Tyler Ferris</dc:creator>
    </item>
    <item>
      <title>Balancing Velocity and Vulnerability with llamafile</title>
      <link>https://protectai.com/blog/balancing-velocity-vulnerability-llamafile</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/balancing-velocity-vulnerability-llamafile" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-060325-Rapid-Balancing%20with%20llamafile-social.png" alt="Balancing Velocity and Vulnerability with llamafile" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The AI ecosystem is witnessing a significant shift towards open source technologies, with llamafile format now powering 32% of self-hosted AI development according to Wiz’s &lt;a href="https://www.wiz.io/reports/the-state-of-ai-in-the-cloud-2025"&gt;2025 State of AI in the Cloud report&lt;/a&gt;. This portable executable format packages complete LLMs into single files, driving rapid adoption while introducing important security considerations.&amp;nbsp;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/balancing-velocity-vulnerability-llamafile" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-060325-Rapid-Balancing%20with%20llamafile-social.png" alt="Balancing Velocity and Vulnerability with llamafile" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The AI ecosystem is witnessing a significant shift towards open source technologies, with llamafile format now powering 32% of self-hosted AI development according to Wiz’s &lt;a href="https://www.wiz.io/reports/the-state-of-ai-in-the-cloud-2025"&gt;2025 State of AI in the Cloud report&lt;/a&gt;. This portable executable format packages complete LLMs into single files, driving rapid adoption while introducing important security considerations.&amp;nbsp;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fbalancing-velocity-vulnerability-llamafile&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Adversarial ML</category>
      <category>Threat Intelligence</category>
      <pubDate>Wed, 04 Jun 2025 18:11:25 GMT</pubDate>
      <guid>https://protectai.com/blog/balancing-velocity-vulnerability-llamafile</guid>
      <dc:date>2025-06-04T18:11:25Z</dc:date>
      <dc:creator>Mehrin Kiaini &amp; Faisal Khan</dc:creator>
    </item>
    <item>
      <title>Security Spotlight: Securing Cloud &amp; AI Products with Guardrails</title>
      <link>https://protectai.com/blog/security-spotlight-securing-ai-with-guardrails</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/security-spotlight-securing-ai-with-guardrails" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-Security%20Spotlight-Junaid%20Khan-site.webp" alt="Security Spotlight: Securing Cloud &amp;amp; AI Products with Guardrails" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="line-height: 1.25;"&gt;In today’s landscape where Cloud and AI are becoming the very fabric of digital innovation, security has transcended from its traditional role as an add-on or checklist item into a fundamental component integrated into every phase. Security acts as the specification, a critical gate within the CI/CD pipeline, and a vital runtime safeguard. It’s an intrinsic part of every system, ranging from a public REST API serving millions to the specialized GPU cluster training tomorrow's autonomous agents.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/security-spotlight-securing-ai-with-guardrails" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-Security%20Spotlight-Junaid%20Khan-site.webp" alt="Security Spotlight: Securing Cloud &amp;amp; AI Products with Guardrails" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="line-height: 1.25;"&gt;In today’s landscape where Cloud and AI are becoming the very fabric of digital innovation, security has transcended from its traditional role as an add-on or checklist item into a fundamental component integrated into every phase. Security acts as the specification, a critical gate within the CI/CD pipeline, and a vital runtime safeguard. It’s an intrinsic part of every system, ranging from a public REST API serving millions to the specialized GPU cluster training tomorrow's autonomous agents.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fsecurity-spotlight-securing-ai-with-guardrails&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Secure by Design</category>
      <pubDate>Wed, 28 May 2025 19:37:57 GMT</pubDate>
      <guid>https://protectai.com/blog/security-spotlight-securing-ai-with-guardrails</guid>
      <dc:date>2025-05-28T19:37:57Z</dc:date>
      <dc:creator>Junaid Khan</dc:creator>
    </item>
    <item>
      <title>Assessing the Security of 4 Popular AI Reasoning Models</title>
      <link>https://protectai.com/blog/assessing-security-popular-reasoning-models</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/assessing-security-popular-reasoning-models" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-052025-Rapid-4%20Top%20AI%20Reasoning%20Models-site.webp" alt="Assessing the Security of 4 Popular AI Reasoning Models" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;In the race to create more capable AI systems, reasoning models stand out as frontrunners.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/assessing-security-popular-reasoning-models" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-052025-Rapid-4%20Top%20AI%20Reasoning%20Models-site.webp" alt="Assessing the Security of 4 Popular AI Reasoning Models" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;In the race to create more capable AI systems, reasoning models stand out as frontrunners.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fassessing-security-popular-reasoning-models&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Red Teaming</category>
      <category>Model Security</category>
      <pubDate>Wed, 21 May 2025 20:10:30 GMT</pubDate>
      <guid>https://protectai.com/blog/assessing-security-popular-reasoning-models</guid>
      <dc:date>2025-05-21T20:10:30Z</dc:date>
      <dc:creator>Sailesh Mishra &amp; Mukunth Madavan</dc:creator>
    </item>
    <item>
      <title>Specialized Models Beat Single LLMs for AI Security</title>
      <link>https://protectai.com/blog/specialized-models-beat-single-llms-for-ai-security</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/specialized-models-beat-single-llms-for-ai-security" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-051225-Rapid-Modular%20beats%20monolithic-site.webp" alt="Specialized Models Beat Single LLMs for AI Security" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As you continue to deploy LLM-powered applications into your enterprise, securing these systems against evolving threats becomes increasingly more complex and critical. Often, security teams are divided on how to best tackle this challenge, with two competing approaches emerging in the market:&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://protectai.com/blog/specialized-models-beat-single-llms-for-ai-security" title="" class="hs-featured-image-link"&gt; &lt;img src="https://protectai.com/hubfs/Protect%20AI-Blog-051225-Rapid-Modular%20beats%20monolithic-site.webp" alt="Specialized Models Beat Single LLMs for AI Security" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As you continue to deploy LLM-powered applications into your enterprise, securing these systems against evolving threats becomes increasingly more complex and critical. Often, security teams are divided on how to best tackle this challenge, with two competing approaches emerging in the market:&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=22563925&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fprotectai.com%2Fblog%2Fspecialized-models-beat-single-llms-for-ai-security&amp;amp;bu=https%253A%252F%252Fprotectai.com%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>LLM Security</category>
      <pubDate>Tue, 13 May 2025 20:35:58 GMT</pubDate>
      <guid>https://protectai.com/blog/specialized-models-beat-single-llms-for-ai-security</guid>
      <dc:date>2025-05-13T20:35:58Z</dc:date>
      <dc:creator>Jane Leung and Oleksandr Yaremchuk</dc:creator>
    </item>
  </channel>
</rss>
