MACHINE UNLEARNING FOR AI LABS

Ship safer frontier models

Frontier models learn everything – including behaviors and data they shouldn’t. Machine Unlearning is a breakthrough in post-training and alignment, delivering targeted fixes with less time, data, and compute.

The Problem

The Post-Training Bottleneck

Current post-training methods are often too broad for surgical fixes, demanding large volumes of compute and data while producing conflicting updates and unpredictable behavioral shifts.

Precision Mismatch

SFT and RLHF optimize broad objectives, which can cause wide-scale "collateral damage" when you only need to fix a localized behavior.

Data Retraining Deadlock

Relying on new datasets for every emerging concern is reactive, expensive, and slow.

Efficiency Roadblock

Current post-training is blunt and expensive: fixing narrow issues requires weeks of compute, and every new problem triggers another slow retraining cycle.

The Solution

Surgical Precision. No Performance Trade-off.

Explicit Behavior Mitigation

Identify and attenuate the specific internal representations driving unwanted patterns—like hallucinations or bias—without disrupting world knowledge or reasoning.

Parameter-Level Data Elimination

Completely eliminate the influence of copyrighted, restricted, or poisoned training data at the parameter level.
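As a toy illustration of the idea (a minimal sketch of one well-known unlearning technique, not necessarily Hirundo’s actual method), parameter-level elimination can be pictured as gradient descent on the data to keep combined with gradient ascent on the data to forget. Here a linear model trained on partially poisoned data recovers once the poisoned subset is unlearned; all shapes, constants, and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5-feature linear model; 10 of 100 training rows are poisoned.
X = rng.standard_normal((100, 5))
w_true = rng.standard_normal(5)
X_forget, X_retain = X[:10], X[10:]
y_forget = -(X_forget @ w_true)          # poisoned (flipped) labels
y_retain = X_retain @ w_true             # clean labels

# "Trained" weights: least squares over clean + poisoned data together.
X_all = np.vstack([X_forget, X_retain])
y_all = np.concatenate([y_forget, y_retain])
w0 = np.linalg.lstsq(X_all, y_all, rcond=None)[0]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Unlearning loop: descend on the retain loss, ascend on the forget loss.
w, lr, ascent = w0.copy(), 0.01, 0.05
for _ in range(300):
    g_retain = 2 * X_retain.T @ (X_retain @ w - y_retain) / len(y_retain)
    g_forget = 2 * X_forget.T @ (X_forget @ w - y_forget) / len(y_forget)
    w -= lr * (g_retain - ascent * g_forget)

# After unlearning, the model fits the clean data better and the
# poisoned data worse than the originally trained weights did.
```

The ascent coefficient is kept small so the retain loss dominates; in practice this trade-off is what determines how much capability is preserved while the forget set’s influence is removed.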

Transparent, Modular & Reversible Edits

Alignment changes are expressed as Behavior Vectors or LoRA Adapters. These artifacts can be independently evaluated, scaled, or rolled back without modifying the base model weights.


Unlearn any behavior from your LLMs

Hirundo’s solution unlearns custom or pre-defined behaviors from any LLM, removing them from the model weights themselves rather than masking them with external filters.

The Results

85% reduction in jailbreaks

Successful prompt injections reduced by up to 85%, validated on benchmarks like PurpleLlama across open-weight frontier models.

70% reduction in biases

Unlearned LLMs achieved up to 70% reduction in biases, verified on benchmarks like BBQ.

97% faster

Targeted unlearning completed in hours on standard GPU infrastructure – instead of weeks of retraining.

Enterprise-Ready Architecture

Built for Security and Transparency


White-Box Control: Behavior definitions are derived from explicit, reviewable datasets that can be fully customized by your team.

Secure Deployment: Designed for fully air-gapped environments via Kubernetes and S3-compatible storage.

Comprehensive Diagnostics: Scan models against custom or industry benchmarks to automatically assess failure points before remediation.



Frequently Asked Questions
What problem does Hirundo solve for frontier model builders?
What kinds of issues can Hirundo address?
How does data unlearning work in practice?
How does Hirundo handle learned behaviors like jailbreaks?
How do you know these changes don’t degrade the model?
What results have you observed on frontier-scale models?
How is this different from additional post-training or fine-tuning?
When in the model lifecycle does Hirundo make sense?
What are the limits of this approach?
Integrations

Seamless integration with your AI stack

No workflow changes needed. Our SOC 2-certified solution runs as an API or platform, with deployment available as SaaS, in your VPC, or air-gapped on-premises.

Testimonials

Leading AI experts trust Hirundo

As AI regulation evolves, cost-effective Machine Unlearning technology will become a must.

Avi Tel-Or

CTO, Intel Ignite

I've tried many data quality solutions. Hirundo finds data issues and mislabels at a level I’ve never seen before.

Dan Erez

AI Tech Lead, Taranis