MACHINE UNLEARNING FOR SECURITY & DATA PRIVACY

Remediate AI model risks at the source

Unlearn risky data and behavior your models should never have learned. Machine Unlearning provides core remediation — not just perimeter defense.

The Problem

Perimeter defenses don’t fix what’s inside the model

When models memorize PII, develop jailbreak vulnerabilities, or absorb poisoned training data, guardrails and monitoring only manage the symptoms. The risk stays in the model.

Models know and do things they shouldn’t

Whether your model is pre-trained or fine-tuned, it contains data and weaknesses that can be exploited in production.

Guardrails get bypassed

Output filtering and prompt controls are important layers, but adversaries find ways around them.

No way to fix the model itself

Process controls and fine-tuning add overhead and mask LLM vulnerabilities — without actually remediating them.

The Solution

Harden against jailbreaks and attacks

Reduce prompt injection vulnerabilities by up to 85% and strengthen models against adversarial attacks, at the parameter level rather than just the perimeter.

Remove memorized data completely

Eliminate the risk of data leakage or exfiltration with 100% removal of PII, PHI, and other sensitive information from trained models. Not filtered from outputs, but actually gone from the model’s parameters.

Reduce risks in hours, not months

When security issues are discovered — whether in testing or production — mitigate them in hours without taking systems offline. No months-long retraining, no service disruption.

Unlearn any behavior from your LLMs

Hirundo’s solution unlearns custom or pre-defined behaviors from any LLM, removing them from the model itself rather than merely filtering them from its outputs.
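
Methods vary across the machine unlearning literature. As a rough, hypothetical illustration of what parameter-level removal means (this is not Hirundo’s actual implementation), the sketch below applies gradient ascent on a "forget set" while a KL penalty against a frozen reference model preserves behavior on a "retain set". It assumes a Hugging Face-style causal LM interface, and every name in it is illustrative.

```python
# Hypothetical sketch of gradient-ascent unlearning with a retain-set
# KL penalty. Illustrative only: NOT Hirundo's implementation, and all
# names here (unlearning_step, retain_weight, ...) are made up.
import torch
import torch.nn.functional as F

def unlearning_step(model, ref_model, forget_batch, retain_batch,
                    optimizer, retain_weight=1.0):
    """One update: raise the loss on data to forget, while staying
    close to a frozen reference model on data to retain."""
    optimizer.zero_grad()

    # Gradient ascent on the forget set: negating the language-modeling
    # loss makes the optimizer push probability away from this data.
    forget_out = model(**forget_batch, labels=forget_batch["input_ids"])
    forget_loss = -forget_out.loss

    # KL penalty on the retain set: match the frozen reference model's
    # token distribution so unrelated capabilities are preserved.
    retain_logits = model(**retain_batch).logits
    with torch.no_grad():
        ref_logits = ref_model(**retain_batch).logits
    retain_loss = F.kl_div(
        F.log_softmax(retain_logits, dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )

    total = forget_loss + retain_weight * retain_loss
    total.backward()
    optimizer.step()
    return total.item()
```

The retain-set term is what separates targeted unlearning from simply degrading the model: the forget loss pushes the targeted behavior out of the parameters, while the KL term anchors everything else to the original model.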

The Results

85% reduction in jailbreaks

Unlearned models show up to an 85% reduction in successful prompt injection attacks, verified on benchmarks such as PurpleLlama (a sketch of this kind of before-and-after check follows these results).

100% PII removal

100% removal of fine-tuned PII from LLMs, with zero impact on other data or functionality.

70% reduction in biases

Our unlearned LLMs achieved up to a 70% reduction in measured bias, verified on benchmarks such as BBQ.
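
Figures like these can be sanity-checked on your own models with a simple before-and-after harness: replay one fixed adversarial prompt set against the base and unlearned models and compare attack success rates. The sketch below is hypothetical; the stub generators and judge stand in for real model endpoints and a real safety classifier such as Llama Guard.

```python
# Hypothetical before/after harness: replay one fixed adversarial
# prompt set against both models and compare attack success rates.
# Everything below the function is a stub for illustration only.

def attack_success_rate(generate, prompts, is_jailbroken):
    """Fraction of prompts whose output the judge flags as a jailbreak."""
    hits = sum(is_jailbroken(generate(p)) for p in prompts)
    return hits / len(prompts)

adversarial_prompts = ["<adversarial prompt 1>", "<adversarial prompt 2>"]
base_generate = lambda p: "unsafe output"      # stands in for the original model
unlearned_generate = lambda p: "safe refusal"  # stands in for the unlearned model
is_jailbroken = lambda out: "unsafe" in out    # stands in for a real judge

before = attack_success_rate(base_generate, adversarial_prompts, is_jailbroken)
after = attack_success_rate(unlearned_generate, adversarial_prompts, is_jailbroken)
print(f"attack success rate: {before:.0%} -> {after:.0%}")
```

The key design point is holding the prompt set and the judge fixed across both runs, so any drop in success rate is attributable to the model change rather than to the evaluation.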

Frequently Asked Questions

What types of risks does Hirundo target at the model level?
Why aren’t guardrails and output monitoring sufficient?
Can Hirundo actually remove sensitive data from a trained model?
How does Hirundo handle risks like jailbreaks and prompt injection?
How do you validate that unlearning actually reduced risk?
Does this work for models that are already deployed?
What kinds of models can Hirundo work with?
What are the limits of this approach?
When should teams use Hirundo instead of retraining?
How does this change the security posture of an AI system?
Integrations

Seamless integration with your AI stack

No need to change workflows

Testimonials

Leading AI experts trust Hirundo

As AI regulation evolves, cost-effective Machine Unlearning technology will become a must.

Avi Tel-Or

CTO, Intel Ignite

I’ve tried many data quality solutions. Hirundo finds data issues and mislabels at a level I’ve never seen before.

Dan Erez

AI Tech Lead, Taranis