
OpenAI Privacy Filter

PII detection and masking for long-text, local redaction workflows.

Privacy Filter is a bidirectional token-classification model designed to spot privacy spans in one forward pass, decode coherent boundaries, and fit into local sanitization pipelines where teams want speed, control, and tunable policy behavior.

Independent landing page built from public OpenAI materials. Useful for inbound search and deployment-led conversion, not an official OpenAI property.

License
Apache 2.0
Footprint
1.5B total / 50M active
Context
128K tokens
Taxonomy
8 span categories

Raw input

Nina Patel can be reached at [email protected] or +1 415 555 0114. Her onboarding token is sk-live-83x...
01

Classify each token against the privacy taxonomy.

02

Decode stable BIOES spans with constrained Viterbi scoring.

03

Route output into redact, eval, or train workflows.

Operating point: shift toward precision or recall depending on your review cost.
Runtime tunable

Why teams look at it

A compact model tuned for practical data minimization pipelines.

The upstream project emphasizes local control, long-text coverage, and the ability to adjust span behavior without moving every workflow into a hosted black box.

Local-first runtime

Fits on laptops, runs in browsers, and can stay on-prem.

Use CPU or GPU execution paths depending on throughput needs, while keeping sensitive text inside your own environment.

Long context

Process large documents without the usual chunking tax.

The 128K-token context window is aimed at logs, transcripts, reviews, and other long documents where chunk boundaries can break detection quality.

Tunable behavior

Change boundary sensitivity with operating-point presets.

Sequence-decoding controls let teams bias toward more recall or more precision depending on audit load and downstream risk.
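One common way to implement such a bias, sketched here under assumptions (the project's actual operating-point presets may work differently), is to shift the score of the outside label before span decoding:

```python
def shift_operating_point(log_probs, o_bias):
    """Rescale the precision/recall trade-off before span decoding.

    o_bias < 0 penalizes the outside label "O", so more tokens fall into
    privacy spans (higher recall); o_bias > 0 favors "O" (higher precision).
    log_probs: list of {label: score} dicts, one per token.
    """
    return [
        {lab: (s + o_bias if lab == "O" else s) for lab, s in tok.items()}
        for tok in log_probs
    ]

# At this token, "O" narrowly beats the email label...
scores = [{"O": -0.4, "S-private_email": -1.1}]
# ...but a recall-biased operating point flips the decision.
recall_biased = shift_operating_point(scores, o_bias=-1.0)
```

A single scalar like `o_bias` is the simplest knob; per-category biases follow the same pattern when some span types are costlier to miss than others.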

Adaptable

Fine-tune when your privacy policy differs from the default label policy.

The reference repo includes train and eval flows so teams can calibrate against in-domain examples instead of treating the base checkpoint as final truth.

How it works

Three layers: classify, decode, decide.

01

Single-pass token labeling

The model predicts label probabilities over privacy classes for every token in the input sequence, rather than generating output text token by token.
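As a contrast with generative decoding, here is a toy sketch of what "one forward pass" yields: a probability distribution over the label set for every input token at once. The logits and the three-label set are hypothetical stand-ins for the model's real 33-way outputs:

```python
import math

def softmax(logits):
    """Normalize a logit vector into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# One encoder pass scores every token simultaneously; nothing is generated
# token by token. Three hypothetical labels stand in for the 33 real classes.
labels = ["O", "B-private_person", "E-private_person"]
logits = {                       # hypothetical per-token logits
    "Nina":  [0.2, 3.1, 0.4],
    "Patel": [0.1, 0.3, 2.9],
    "works": [4.0, 0.2, 0.1],
}
probs = {tok: softmax(v) for tok, v in logits.items()}
best = {tok: labels[p.index(max(p))] for tok, p in probs.items()}
```

These per-token argmax picks are only the raw signal; the next stage cleans them up with sequence-level constraints.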

02

Constrained span decoding

BIOES boundary tags are decoded with sequence-level constraints so boundaries stay more coherent than a naive per-token argmax.
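This is not the project's actual decoder, but a minimal sketch of the idea, assuming per-token label log-probabilities and hyphenated `TAG-category` label strings: transitions that would break a span (for example `I-secret` directly after `O`) are forbidden, so the best-scoring path always forms well-formed spans.

```python
import math

def allowed(prev, nxt):
    """BIOES transition rule: spans open with B/S and close with E/S."""
    p_tag, n_tag = prev[0], nxt[0]
    if p_tag in ("O", "E", "S"):              # no span currently open
        return n_tag in ("O", "B", "S")
    # span open (prev is B-x or I-x): must continue or close, same category
    return n_tag in ("I", "E") and nxt[2:] == prev[2:]

def viterbi(log_probs, labels):
    """Best label path under BIOES constraints.

    log_probs: list of {label: log_probability} dicts, one per token.
    """
    n = len(log_probs)
    score = [{} for _ in range(n)]
    back = [{} for _ in range(n)]
    for lab in labels:  # a sequence may only start with O, B-*, or S-*
        score[0][lab] = log_probs[0][lab] if lab[0] in ("O", "B", "S") else -math.inf
    for t in range(1, n):
        for lab in labels:
            best_prev = max(
                (p for p in labels if allowed(p, lab)),
                key=lambda p: score[t - 1][p],
            )
            score[t][lab] = score[t - 1][best_prev] + log_probs[t][lab]
            back[t][lab] = best_prev
    # a sequence may only end with O, E-*, or S-*
    last = max((l for l in labels if l[0] in ("O", "E", "S")),
               key=lambda l: score[n - 1][l])
    path = [last]
    for t in range(n - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

Note how this repairs a naive argmax: if the highest-scoring label at the first token were `I-secret`, the constraint makes the path open with `B-secret` instead.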

03

Policy tuning and review

Operating-point settings, evaluation runs, and optional fine-tuning let teams map the detector to their own data distributions and review cost.

Input text → 33 token-level classes → 8 span categories

The published taxonomy expands each privacy class into boundary-aware BIOES labels plus an outside class, which yields 33 token-level outputs before span consolidation.
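The arithmetic is 8 categories × 4 boundary tags (Begin, Inside, End, Single-token) plus one shared outside label. A minimal sketch of the expansion, assuming hyphenated tag names (the repo's actual label strings may differ):

```python
# Expand the 8 span categories into boundary-aware BIOES token labels.
CATEGORIES = [
    "private_person", "private_email", "private_phone", "private_address",
    "private_date", "account_number", "private_url", "secret",
]

def expand_bioes(categories):
    """Return one label per (tag, category) pair plus a single outside label."""
    labels = ["O"]  # outside: token belongs to no privacy span
    for cat in categories:
        for tag in ("B", "I", "E", "S"):  # Begin, Inside, End, Single-token
            labels.append(f"{tag}-{cat}")
    return labels

labels = expand_bioes(CATEGORIES)
print(len(labels))  # 33
```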

Label taxonomy

Eight categories cover the default span policy.

The base checkpoint focuses on identifying strongly person-linked spans and secrets. If your governance boundary is different, the repo encourages local evaluation and tuning.

private_person

Personal names and person-linked references that should be masked in a privacy-preserving view.

private_email

Email addresses that directly identify or reach an individual account.

private_phone

Phone numbers and contact strings tied to a person or private endpoint.

private_address

Street and mailing details that reveal a private location.

private_date

Dates with privacy sensitivity, such as birthdays or other personally linked dates.

account_number

Account, reference, or financial number strings with strong identifier risk.

private_url

Private profile, invite, or identifying URLs that should not be exposed downstream.

secret

API keys, credentials, tokens, and other strings that behave like secrets.

Repo modes

The reference repository exposes three practical entry points.

Redact

Run one-shot masking from text, files, or pipes.

opf "Alice was born on 1990-01-02."

Useful for previews, CLI workflows, and wiring the detector into scripted text pipelines.
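The `opf` one-liner hides the plumbing; conceptually, redaction is just replacing detected character spans with category placeholders. A minimal sketch with an illustrative `redact` helper (not the repo's API) and made-up example data:

```python
def redact(text, spans, mask="[{label}]"):
    """Replace detected (start, end, label) spans with placeholders.

    Spans are applied right-to-left so earlier character offsets stay
    valid as the string shrinks or grows. Assumes non-overlapping spans.
    """
    out = text
    for start, end, label in sorted(spans, reverse=True):
        out = out[:start] + mask.format(label=label) + out[end:]
    return out

text = "Nina Patel can be reached at nina@example.com."
spans = [(0, 10, "private_person"), (29, 45, "private_email")]
print(redact(text, spans))
# [private_person] can be reached at [private_email].
```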

Eval

Score a checkpoint on labeled JSONL fixtures.

opf eval examples/data/sample_eval_five_examples.jsonl

Use this path to understand domain fit before you trust the model on production documents.
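The fixture schema is not spelled out here; the sketch below shows one plausible shape, a guess only, with one JSON object per line pairing text with gold spans. Check the repo's sample file for the actual field names before writing your own fixtures.

```python
import json

# Hypothetical eval-fixture line: text plus gold character spans.
# The real schema lives in the repo's sample JSONL; this shape is assumed.
example = {
    "text": "Alice was born on 1990-01-02.",
    "spans": [
        {"start": 0, "end": 5, "label": "private_person"},
        {"start": 18, "end": 28, "label": "private_date"},
    ],
}
line = json.dumps(example)        # one line of the JSONL file
restored = json.loads(line)       # round-trips cleanly
```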

Train

Fine-tune the checkpoint to your local policy boundary.

opf train /path/to/train.jsonl --output-dir /path/to/checkpoint

Best fit when the default taxonomy, span boundaries, or domain language do not match your own data.

Limits and review costs

This is a redaction aid, not a final privacy guarantee.

The model can help reduce exposure, but the upstream materials are explicit that production use still requires evaluation, governance, and review paths.

Over-reliance creates blind spots

Do not treat detected spans as a complete anonymization claim or compliance boundary by themselves.

Policy is static by default

The checkpoint only sees the trained taxonomy. Different organizations may need different policies.

Distribution shift still matters

Names, domains, languages, and credential formats outside training patterns can reduce reliability.

Human review stays necessary

Medical, legal, financial, HR, education, and government workflows need tighter review loops.

FAQ

Common privacy filter questions from evaluation-stage teams.

What is OpenAI Privacy Filter?

It is a bidirectional token-classification model that detects privacy-sensitive spans in text and then decodes them into coherent redaction spans.

Can Privacy Filter run in my own environment?

Yes. The project is aimed at local or on-prem workflows and can be run on CPU or GPU depending on the checkpoint and deployment choice.

What does the default taxonomy detect?

The published base categories are account numbers, addresses, emails, person names, phone numbers, private URLs, private dates, and secrets.

Is Privacy Filter enough for compliance on its own?

No. It should be one layer in a broader privacy-by-design system that also includes policy definition, evaluation, and human review where mistakes are costly.

When should teams fine-tune the model?

Fine-tuning is the right move when your document formats, decision boundaries, or domain language do not line up with the base checkpoint.

Waitlist / contact

Use this page as the front door for privacy-filter interest.

The site is ready to capture inbound attention while you package an integration offer, evaluation service, or domain-specific wrapper around the underlying model.