Below you will find pages that utilize the taxonomy term “CodeJonesW”
March 14, 2026
sensei-eval
Version updated for https://github.com/CodeJonesW/sensei-eval to v0.8.0.
This action is used across all versions by 0 repositories.
Action Type: This is a Composite action.
Go to the GitHub Marketplace to find the latest changes.
Action Summary: The sensei-eval GitHub Action and TypeScript library streamline the evaluation of AI-generated educational content by performing deterministic checks and leveraging LLM scoring. It automates the detection of content quality regressions in CI workflows, enabling teams to maintain consistent prompt quality.
March 12, 2026
sensei-eval
Version updated for https://github.com/CodeJonesW/sensei-eval to v0.6.0.
This action is used across all versions by 0 repositories.
Action Type: This is a Composite action.
Go to the GitHub Marketplace to find the latest changes.
Action Summary: The sensei-eval GitHub Action evaluates AI-generated educational content for quality using both deterministic checks and LLM-based scoring. It automates the detection of prompt quality regressions in CI workflows, enabling consistent content evaluation and quality assurance.
March 9, 2026
sensei-eval
Version updated for https://github.com/CodeJonesW/sensei-eval to v0.5.0.
This action is used across all versions by 0 repositories.
Action Type: This is a Composite action.
Go to the GitHub Marketplace to find the latest changes.
Action Summary: The sensei-eval GitHub Action is designed to evaluate AI-generated educational content using deterministic checks and LLM-based scoring. It automates the detection of prompt quality regressions in CI workflows by comparing content against predefined baselines and identifying any significant score drops.
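The baseline-comparison step described above can be sketched in TypeScript. This is a minimal illustration, not the actual sensei-eval API: the `EvalScore` shape, the `detectRegressions` function, and the drop threshold are all hypothetical names chosen for the example.

```typescript
// Hypothetical sketch of regression detection against a stored baseline.
// Names and shapes are illustrative, not sensei-eval's real interface.
interface EvalScore {
  promptId: string;
  score: number; // e.g. an LLM-based quality score in [0, 1]
}

// Flag prompts whose current score dropped more than `threshold`
// below their baseline score.
function detectRegressions(
  baseline: EvalScore[],
  current: EvalScore[],
  threshold = 0.1,
): string[] {
  const baselineById = new Map(baseline.map((s) => [s.promptId, s.score]));
  return current
    .filter((s) => {
      const base = baselineById.get(s.promptId);
      return base !== undefined && base - s.score > threshold;
    })
    .map((s) => s.promptId);
}

const baseline: EvalScore[] = [
  { promptId: "lesson-1", score: 0.9 },
  { promptId: "lesson-2", score: 0.8 },
];
const current: EvalScore[] = [
  { promptId: "lesson-1", score: 0.88 }, // small dip, within threshold
  { promptId: "lesson-2", score: 0.6 },  // 0.2 drop, flagged as a regression
];
console.log(detectRegressions(baseline, current)); // → [ "lesson-2" ]
```

In a CI workflow, a non-empty result like this would be the signal to fail the check so the quality regression is caught before merge.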