Below you will find pages that use the taxonomy term “Zero-Harm-AI-LLC”
April 10, 2026
PromptShield AI Security
Version updated for https://github.com/Zero-Harm-AI-LLC/promptshield to version v1.0.5.
This action is used across all versions by ? repositories. Action Type: Composite action. See the GitHub Marketplace for the latest changes.
Action Summary: PromptShield AI Security is a GitHub Action that automates the detection of AI-specific security risks in pull requests by scanning code changes for vulnerabilities such as prompt injection, secrets exposure, PII leaks, and unsafe usage of large language models (LLMs). It provides actionable feedback through GitHub Actions annotations, generates detailed reports (JSON, Markdown, SARIF), and supports reviewer-style PR feedback workflows. This tool helps teams proactively identify and mitigate security risks associated with integrating AI and LLMs into their codebases.
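As a rough sketch of how a composite action like this is typically wired into a pull-request workflow (the `uses:` reference follows the repository URL above; the permissions shown and any report paths are assumptions, not documented parameters of PromptShield — check the action's own README for its actual inputs and outputs):

```yaml
# Hypothetical workflow sketch: run the PromptShield scan on pull requests.
name: promptshield-scan
on:
  pull_request:

permissions:
  contents: read
  pull-requests: write   # assumed: needed if the action posts reviewer-style comments
  security-events: write # assumed: needed if SARIF output is uploaded to code scanning

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Zero-Harm-AI-LLC/promptshield@v1.0.5
```

Pinning the action to the tag from the matching changelog entry (here v1.0.5) keeps the scan reproducible across runs.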
April 8, 2026
PromptShield AI Security
Version updated for https://github.com/Zero-Harm-AI-LLC/promptshield to version v1.0.4.
Action Summary: PromptShield AI Security is a GitHub Action that automatically scans pull requests for AI-specific security risks, such as prompt injection vulnerabilities, exposure of secrets or PII, unsafe use of large language models (LLMs), and improper handling of sensitive data. It automates the detection of potential issues in PR diffs, providing actionable feedback through annotations, JSON/Markdown reports, and reviewer-style comments to enhance code security and compliance. By integrating seamlessly into workflows, it helps teams proactively address AI-related risks during the development process.
April 8, 2026
PromptShield AI Security
Version updated for https://github.com/Zero-Harm-AI-LLC/promptshield to version v1.0.3.
Action Summary: PromptShield AI Security is a GitHub Action designed to detect AI-related security risks in pull requests by scanning code changes for issues like prompt injection vulnerabilities, secrets exposure, sensitive data leaks, and unsafe usage of large language model (LLM) tools. It automates the process of identifying and reporting these risks, offering outputs such as GitHub Actions annotations, JSON, Markdown, and SARIF formats, as well as reviewer-style feedback for streamlined code reviews. By integrating with zero-harm-ai-detectors, it enhances security and compliance in workflows involving AI-driven systems.
April 7, 2026
PromptShield AI Security
Version updated for https://github.com/Zero-Harm-AI-LLC/promptshield to version v1.0.2.
Action Summary: PromptShield AI Security is a GitHub Action designed to identify AI-specific security risks in pull requests by scanning code changes for vulnerabilities such as prompt injection, secrets exposure, PII leaks, and unsafe usage of large language models (LLMs). It automates the detection of these issues, provides actionable feedback through GitHub Actions annotations, and supports output formats like JSON, Markdown, and SARIF for integration into development workflows. This tool enhances code review processes by helping teams mitigate risks associated with integrating AI systems.
April 2, 2026
PromptShield AI Security
Version updated for https://github.com/Zero-Harm-AI-LLC/promptshield to version v1.0.0.
Action Summary: PromptShield AI Security is a GitHub Action and CLI tool designed to detect AI-specific security risks in pull requests by analyzing code changes for vulnerabilities such as prompt injection risks, secrets exposure, PII leaks, unsafe LLM usage, and sensitive data handling. It automates security scanning, provides actionable feedback through GitHub Actions annotations, and generates outputs in multiple formats (e.g., JSON, Markdown, SARIF). This tool streamlines AI-related code reviews, enhancing security and reducing the risk of propagating vulnerabilities into production.
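The summaries above repeatedly mention SARIF output; a common follow-up in any scanning workflow is to upload such a report to GitHub code scanning via the standard `github/codeql-action/upload-sarif` action. The report filename below is a placeholder assumption, not a documented output path of PromptShield:

```yaml
      # Hypothetical follow-up step inside the same job: upload a SARIF
      # report produced by the scan to GitHub code scanning.
      # `results.sarif` is a placeholder path, not a documented output.
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```

Uploading SARIF this way surfaces the findings in the repository's Security tab alongside other code-scanning alerts (the workflow needs the `security-events: write` permission for this step).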