
Making AI forget
Your AI models contain risks you can’t control.
Until now.
Stop choosing between speed and safety. Machine Unlearning surgically removes risky knowledge and behavior from trained LLMs at the model level, without retraining.
Trusted by AI leaders:
When models misbehave, current solutions are only a band-aid.
Jailbreak vulnerabilities. Hallucinations. Bias. Toxic outputs. Memorized PII.
When they surface, you either live with imperfect defenses or disrupt everything to retrain.
Guardrails and output filtering
Can (and will) be bypassed; the model itself stays compromised.
Retraining or fine-tuning
Months of delay while competitors ship and your market window closes.
Fix what your model learned, without starting over
Machine Unlearning lets you fix risks at the model level – whether you’re preparing to ship, responding to issues in production, or continuously hardening models over time.
Unblock launches in hours, not weeks
Harden models before deployment in days, so late-stage issues don't derail your launch.
Fix model risks at the core
Remove memorized PII completely, reduce jailbreaks by up to 85%, and cut bias by up to 70% – at the model level, not the perimeter.
Continuously harden in production
Fix production issues and improve model behavior without downtime. No more waiting months for retraining.
How Hirundo works
Detect risky knowledge and behavior
Built-in evaluations and red-team testing pinpoint risky knowledge, memorized PII, and other undesirable behavior in your models – so you know exactly what to fix.
Surgically target it with Machine Unlearning
Hirundo identifies and modifies the specific parameters responsible for the unwanted behavior, removing what the model learned (without harming model utility).
Get a fixed model in hours
A higher-performing, unlearned model is generated in a fraction of the time retraining would take.
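For the technically curious: the core idea behind parameter-level unlearning can be sketched in a few lines. The toy below is purely illustrative and assumes nothing about Hirundo's proprietary method – it uses a linear model and plain gradient ascent on a designated "forget set" to show how existing weights can be edited so the model no longer reproduces specific training examples, without retraining from scratch.

```python
import numpy as np

# Illustrative sketch only: NOT Hirundo's actual technique.
# Idea: rather than retraining, nudge the trained parameters so the
# model stops fitting a designated "forget set", via gradient ascent
# on that set's loss.

rng = np.random.default_rng(0)

# Toy stand-in for a trained model: linear regression on noisy data
X = rng.normal(size=(100, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(2)
for _ in range(200):                      # "training"
    grad = X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Forget set: examples the model should no longer reproduce
X_f, y_f = X[:5], y[:5]
loss_before = mse(w, X_f, y_f)            # small: memorized

# "Unlearning": small gradient-ascent steps on the forget set only
for _ in range(20):
    grad_f = X_f.T @ (X_f @ w - y_f) / len(X_f)
    w += 0.05 * grad_f                    # ascend the forget-set loss

loss_after = mse(w, X_f, y_f)             # larger: mapping degraded
```

Real LLM unlearning must additionally localize which parameters to touch and preserve utility on everything else – that is the hard part this toy omits.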
Stop gambling with your production AI
You shouldn’t have to choose between moving fast and keeping models production-safe. With Machine Unlearning, you don’t.
85% reduction in jailbreaks
Unlearned models are up to 85% more resistant to jailbreaks and prompt injections, verified on benchmarks like PurpleLlama.
70% reduction in biases
Our unlearned LLMs achieved up to 70% reduction in biases, verified on benchmarks like BBQ.
100% PII removed
We achieved 100% removal of fine-tuned PII from LLMs, with zero impact on other data or functionality.
Leading AI experts trust Hirundo

As AI regulation evolves, cost effective Machine Unlearning technology will become a must.

Avi Tel-Or
CTO, Intel Ignite

I've tried many data quality solutions. Hirundo finds data issues and mislabels at a level I’ve never seen before.

Dan Erez
AI Tech Lead, Taranis
Seamless integration with your AI stack
No need to change workflows
Ready to forget?
Start removing unwanted data with a few clicks