recallmax

FREE — God-tier long-context memory for AI agents. Injects 500K-1M clean tokens, auto-summarizes with tone/intent preservation, and compresses a 14-turn history into 800 tokens.

Architectural Overview

Skill Reading

"This module is grounded in ai engineering patterns and exposes 1 core capabilities across 1 execution phases."

RecallMax — God-Tier Long-Context Memory

Overview

RecallMax dramatically extends an AI agent's effective memory. It injects 500K to 1M clean tokens of external context without hallucination drift, auto-summarizes conversations while preserving tone, sarcasm, and intent, and compresses multi-turn histories into high-density token sequences.

Free forever. Built by the Genesis Agent Marketplace.

Install

npx skills add christopherlhammer11-ai/recallmax

When to Use This Skill

  • Use when your agent loses context in long conversations (50+ turns)
  • Use when injecting large RAG/external documents into agent context
  • Use when you need to compress conversation history without losing meaning
  • Use when fact-checking claims across a long thread
  • Use for any agent that needs to remember everything

How It Works

Step 1: Context Injection

RecallMax cleanly injects external context (documents, RAG results, prior conversations) into the agent's working memory. Unlike naive concatenation, it:

  • Deduplicates overlapping content
  • Preserves source attribution
  • Prevents hallucination drift from context pollution
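
Since the skill ships as documentation only, the sketch below is illustrative rather than RecallMax's actual API; the `ContextChunk` shape and `injectContext` name are assumptions. It shows the dedup-with-attribution idea: overlapping chunks collapse to one copy, but every contributing source is kept.

```typescript
// Hypothetical sketch: ContextChunk and injectContext are illustrative
// names, not part of a real RecallMax API.
interface ContextChunk {
  source: string; // where the text came from (doc id, URL, turn number)
  text: string;
}

// Deduplicate chunks by normalized text while merging the sources that
// contributed the content, so attribution survives the merge.
function injectContext(chunks: ContextChunk[]): ContextChunk[] {
  const seen = new Map<string, ContextChunk>();
  for (const chunk of chunks) {
    const key = chunk.text.trim().toLowerCase().replace(/\s+/g, " ");
    const existing = seen.get(key);
    if (existing) {
      // Overlapping content: merge attribution instead of repeating text.
      if (!existing.source.includes(chunk.source)) {
        existing.source += `, ${chunk.source}`;
      }
    } else {
      seen.set(key, { ...chunk });
    }
  }
  return [...seen.values()];
}
```

Normalizing whitespace and case before hashing is a deliberately simple overlap test; a production system would likely use fuzzy or embedding-based similarity instead.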

Step 2: Adaptive Summarization

As conversations grow, RecallMax automatically summarizes older turns while preserving:

  • Tone — sarcasm, formality, urgency
  • Intent — what the user actually wants vs. what they said
  • Key facts — numbers, names, decisions, commitments
  • Emotional register — frustration, excitement, confusion
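
One way to realize this in practice is to bake the preservation targets into the summarizer's instructions. The helper below is a hypothetical sketch (the prompt wording and `buildSummaryPrompt` name are assumptions, not RecallMax internals) showing how the four qualities above can be made explicit to the model doing the summarizing.

```typescript
// Illustrative only: one possible prompt builder for a summarizer model.
// The wording is an assumption, not RecallMax's actual internal prompt.
function buildSummaryPrompt(turns: string[]): string {
  return [
    "Summarize the conversation below into a compact brief.",
    "Preserve: tone (sarcasm, formality, urgency), the user's actual",
    "intent, all key facts (numbers, names, decisions, commitments),",
    "and the emotional register (frustration, excitement, confusion).",
    "Do not flatten sarcasm into literal statements.",
    "",
    ...turns.map((t, i) => `Turn ${i + 1}: ${t}`),
  ].join("\n");
}
```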

Step 3: History Compression

Compress a 14-turn conversation history into ~800 high-density tokens that retain full semantic meaning. The compressed output can be re-expanded if needed.
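
The budgeting decision behind this step can be sketched as follows. The 4-characters-per-token figure is a common rough approximation for English text, not a documented RecallMax constant, and `shouldCompress` is a hypothetical helper name.

```typescript
// Rough heuristic: ~4 characters per token for English text.
// This is an approximation, not a RecallMax constant.
const CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Trigger compression once the history exceeds the target budget
// (e.g. ~800 tokens, matching the compression target described above).
function shouldCompress(history: string[], budget = 800): boolean {
  const total = history.reduce((sum, turn) => sum + estimateTokens(turn), 0);
  return total > budget;
}
```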

Step 4: Fact Verification

Built-in cross-reference checks for controversial or ambiguous claims within the conversation context. Flags contradictions and unsupported assertions.
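
A minimal sketch of the bookkeeping side of such a check, assuming claims have already been extracted as key/value pairs (the extraction itself would need NLP and is not shown; `findContradictions` is an illustrative name, not a real API):

```typescript
// Track the claimed value for each fact key across the conversation and
// flag any key that later receives a conflicting value.
function findContradictions(
  claims: Array<[key: string, value: string]>
): string[] {
  const ledger = new Map<string, string>();
  const flagged = new Set<string>();
  for (const [key, value] of claims) {
    const prior = ledger.get(key);
    if (prior !== undefined && prior !== value) {
      flagged.add(key); // contradiction: same fact, different value
    } else {
      ledger.set(key, value);
    }
  }
  return [...flagged];
}
```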

Best Practices

  • ✅ Use RecallMax at the start of long-running agent sessions
  • ✅ Enable auto-summarization for conversations beyond 20 turns
  • ✅ Use compression before hitting context window limits
  • ✅ Let the fact verifier run on high-stakes outputs
  • ❌ Don't inject unvetted external content without dedup
  • ❌ Don't skip summarization and rely on raw truncation

Related Skills

  • @tool-use-guardian - Tool-call reliability wrapper (also free from Genesis Marketplace)

Primary Stack: TypeScript
Tooling Surface: Guide only
Workspace Path: .agents/skills/recallmax

Operational Ecosystem


This skill is mostly documentation-driven and does not expose extra scripts, references, examples, or templates.

Module Topology

Skill File (parsed metadata) → Skills UI (launch context) → Chat Session → Antigravity Core

Antigravity Core

Principal Engineering Agent: a high-performance agentic architecture developed by DeepMind for autonomous coding tasks. 120 installs, 1 workspace file, 4.2 average workspace reliability (68% five-star, 22% four-star, 10% three-star ratings). No explicit validation signals have been parsed for this skill yet, but the module remains available for inspection and chat launch.
