
Making AI forget

Your AI models contain risks you can’t control.
Until now.

Stop choosing between speed and safety. Machine Unlearning surgically removes risky knowledge and behavior from trained LLMs at the model level, without retraining.

The problem

When models misbehave, current solutions are only a band-aid.

Jailbreak vulnerabilities. Hallucinations. Bias. Toxic outputs. Memorized PII.

When they surface, you either live with imperfect defenses or disrupt everything to retrain.

Guardrails and output filtering

Can (and will) be bypassed. The model itself stays compromised.

Retraining or fine-tuning

Months of delay while competitors ship and you lose your market window.

The Solution

Fix what your model learned, without starting over

Machine Unlearning lets you fix risks at the model level – whether you’re preparing to ship, responding to issues in production, or continuously hardening models over time.

Unblock launches in hours, not weeks

Harden models before deployment in days, so late-stage issues don't derail your launch.

Learn more

Fix model risks at the core

Remove memorized PII completely, reduce jailbreaks by up to 85%, and reduce bias by up to 70% – at the model level, not the perimeter.

Learn more

Continuously harden in production

Fix production issues and improve model behavior without downtime. No more waiting months for retraining.

Learn more

How Hirundo works

Detect risky knowledge and behavior

Built-in evaluations and red-team testing pinpoint risky knowledge, memorized PII, and other undesirable behavior in your models – so you know exactly what to fix.

Surgically target it with Machine Unlearning

Hirundo identifies and modifies the specific parameters responsible for the unwanted behavior, removing what the model learned (without harming model utility).

Get a fixed model in hours

A higher-performing, unlearned model is generated in a fraction of the time it would take to retrain.

Stop gambling with your production AI

You shouldn’t have to choose between moving fast and keeping models production-safe. With Machine Unlearning, you don’t.

LLM VULNERABILITIES

85% reduction in jailbreaks

Unlearned models are up to 85% more resistant to jailbreaks and prompt injections, verified on benchmarks like PurpleLlama.

BIAS

70% reduction in biases

Our unlearned LLMs achieved up to 70% reduction in biases, verified on benchmarks like BBQ.

DATA LEAKAGE

100% PII removed

We achieved 100% removal of fine-tuned PII from LLMs, with zero impact on other data or functionality.

The Problem

AI models are not trustworthy enough for enterprise adoption

Be it LLMs or non-generative models, AI poses intolerable risks in production environments: inaccuracies, vulnerabilities, and compliance issues.

LLM Unlearning

Models can't be protected just by external guardrails. We fix AI at its core.

Unlearning allows you to remediate issues in the model itself, pushing your AI to its full potential.

Data QA for non-generative AI

Automatically boost data and AI accuracy

Most mission-critical AI projects never reach production. We make sure yours do. Beyond LLMs, our platform offers automated QA for vision, radar, time-series, STT, and NLP datasets and models.

Learn more
testimonials

Leading AI experts trust Hirundo

As AI regulation evolves, cost-effective Machine Unlearning technology will become a must.

Avi Tel-Or

CTO, Intel Ignite

I've tried many data quality solutions. Hirundo finds data issues and mislabels at a level I’ve never seen before.

Dan Erez

AI Tech Lead, Taranis

integrations

Seamless integration with your AI stack

No need to change workflows

Ready to forget?

Start removing unwanted data with a few clicks