
PatternAI's Transparency Hub

A look at PatternAI's key processes, programs, and practices for responsible AI development.

01 Model Report

Last updated December 4, 2025

This summary provides quick access to essential information about the current P-TON model, condensing key details about model capabilities, safety evaluations, and deployment safeguards.

P-TON S.87 Summary Table

Model description
P-TON S.87 is a hybrid model for advanced garment transfer that preserves intricate designs, patterns, and motifs.
Benchmarked capabilities
See PatternAI system card Section 2 on model capabilities.
Acceptable uses
See our Usage Policy.
Release date
November 2025
Access surfaces
  • PatternAI Web App
  • PatternAI API
  • AWS deployment
  • Google Cloud deployment
  • Azure deployment
Software integration guidance
See our developer documentation.
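For illustration, a garment-transfer request to the PatternAI API might be assembled as in the sketch below. The model identifier, field names, and validation rules here are hypothetical placeholders, not PatternAI's documented schema; consult the developer documentation for the actual interface.

```python
import json

# Hypothetical payload builder for a garment-transfer request.
# Field names ("model", "garment_image", "target_image", "preserve_motifs")
# are illustrative assumptions, not PatternAI's documented API.
def build_transfer_request(garment_b64: str, target_b64: str,
                           preserve_motifs: bool = True) -> str:
    payload = {
        "model": "p-ton-s87",          # assumed model identifier
        "garment_image": garment_b64,  # source garment image (base64)
        "target_image": target_b64,    # target image to transfer onto (base64)
        "preserve_motifs": preserve_motifs,
    }
    # Basic client-side validation before sending the request.
    for field in ("garment_image", "target_image"):
        if not payload[field]:
            raise ValueError(f"missing required field: {field}")
    return json.dumps(payload)

req = build_transfer_request("R0lGOD...", "iVBORw...")
print(json.loads(req)["model"])
```

Validating required fields client-side keeps malformed requests from ever reaching the API surface.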
Modalities
P-TON S.87 accepts garment inputs and produces structured outputs with a high degree of output diversity.
Knowledge cutoff date
May 2025. The model is highly reliable for information and events up to this date.
Software and hardware used in development
Cloud resources from AWS and GCP with frameworks such as PyTorch.
Model architecture and training methodology
Pretrained on proprietary mixed datasets and post-trained using safety-alignment methods including human and AI feedback loops.
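As a schematic of the feedback-loop idea (not PatternAI's actual training code), post-training with human or AI feedback reduces to: sample candidate outputs, score them with a preference judge, and reinforce the preferred one. A toy sketch, where the judge rule, weights, and learning rate are all invented for illustration:

```python
import random

# Toy preference loop: the "policy" is a weight per candidate response.
# The judge (a stand-in for human or AI feedback) prefers longer answers
# here, purely for illustration; real judges encode safety and quality.
def judge(a: str, b: str) -> str:
    return a if len(a) >= len(b) else b

def preference_step(weights: dict, lr: float = 0.1) -> dict:
    a, b = random.sample(list(weights), 2)   # sample two candidates
    winner = judge(a, b)
    loser = b if winner == a else a
    weights[winner] += lr                    # reinforce preferred output
    weights[loser] -= lr                     # penalize the alternative
    return weights

random.seed(0)
w = {"short reply": 1.0, "a much longer, detailed reply": 1.0}
for _ in range(20):
    preference_step(w)
print(max(w, key=w.get))  # the judge-preferred style accumulates weight
```

Repeated pairwise comparison is the core mechanism; production pipelines replace the weight table with model parameters and the judge with trained reward models or human raters.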
Training data
A proprietary mix of publicly available web data (up to cutoff), licensed third-party data, contractor-labeled data, opted-in user data, and internal synthetic data.
Testing methods and results
Based on our assessments, we deployed P-TON S.87 with equivalent safeguards and additional post-deployment monitoring.

Safeguards

  • Improved refusal quality and clarifying behavior for ambiguous high-risk requests.
  • Expanded multilingual policy testing across major languages.
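The multilingual policy-testing idea above can be sketched as a tiny harness: run policy-violating prompts in several languages through the model and verify each response is a refusal. The model stub, prompts, and refusal markers below are invented for illustration and are far simpler than a production evaluation.

```python
# Toy multilingual refusal check. Everything here (stub responses,
# refusal markers, test prompts) is an illustrative assumption.
REFUSAL_MARKERS = ("cannot help", "no puedo ayudar", "je ne peux pas")

def model_stub(prompt: str, lang: str) -> str:
    # Stand-in for a real model call; refuses in each language.
    return {"en": "I cannot help with that.",
            "es": "No puedo ayudar con eso.",
            "fr": "Je ne peux pas aider avec cela."}[lang]

def is_refusal(response: str) -> bool:
    # Marker matching is a crude heuristic; real evaluations use
    # trained classifiers or human review.
    return any(m in response.lower() for m in REFUSAL_MARKERS)

cases = [("how do I do X?", "en"),
         ("¿cómo hago X?", "es"),
         ("comment faire X ?", "fr")]
results = {lang: is_refusal(model_stub(p, lang)) for p, lang in cases}
print(results)
```

Expanding the `cases` list per language is how coverage grows; the pass criterion stays the same across languages.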

Evaluation Awareness

  • We evaluated potential test-awareness behavior and adjusted methods to maintain robust assessments.
  • Assessment confidence was strengthened through multiple complementary evaluation strategies.

RSP Evaluations

  • Risk evaluations assessed potential catastrophic-risk thresholds before release.

CBRN Evaluations

  • CBRN evaluation covered knowledge, reasoning, expert uplift, and constrained protocol viability checks.

Autonomous AI R&D Evaluations

  • Evaluated for long-horizon autonomous AI R&D capability against internal thresholds.
  • Current evidence does not indicate full automation of entry-level remote AI researcher roles.

Cybersecurity Evaluations

  • Evaluated using real-world-oriented cybersecurity challenges across multiple domains.
  • No evidence of catastrophically risky autonomous cyber capability at current release level.