Redis to Acquire Real-Time Data Platform Decodable; Expands Redis for AI to Deliver Context and Memory for AI Agents and Agentic Systems
Decodable acquisition, managed semantic caching service, and other updates to Redis for AI strengthen Redis’ capabilities as a real-time context engine for agentic systems
SAN FRANCISCO, Sept. 04, 2025 (GLOBE NEWSWIRE) -- Redis, the world’s fastest data platform, today announced a major expansion of its AI strategy at Redis Released 2025.
In his keynote, Redis CEO Rowan Trollope announced Redis’ intention to acquire real-time data platform Decodable, the public preview of Redis’ new LangCache service, and several other improvements to Redis for AI that make it easier for developers to build agents with reliable, persistent memory. Together, these moves accelerate Redis’ evolution from the fastest in-memory data store to an essential infrastructure layer for AI, delivering the context and memory that intelligent agents depend on.
“As AI enters its next phase, the challenge isn’t proving what language models can do; it’s giving them the context and memory to act with relevance and reliability,” said Rowan Trollope, CEO of Redis. “As technology becomes ever more reliant on LLMs, the strategic investment we’re making in Decodable’s platform will make it easier for developers to build and expand data pipelines and convert that data into context within Redis, so it’s fast and always available in the right place at the right time.”
Founded by data infrastructure veteran Eric Sammer, Decodable is a serverless platform that simplifies real-time data ingestion, transformation, and delivery. Its technology replaces weeks of custom pipeline engineering with a declarative service that scales automatically.
“Joining Redis allows us to take Decodable’s vision further and get there faster,” said Eric Sammer, founder and CEO of Decodable. “Together, we can give developers a seamless way to connect and act on their data in real time—unlocking AI systems that are more capable, responsive, and deeply embedded in the workflows they serve.”
Redis also announced the public preview of LangCache, a fully managed semantic caching service that cuts latency and token usage for LLM-dependent applications by as much as 70%, along with several updates to its AI infrastructure tools, including hybrid search enhancements and integrations with the agent frameworks AutoGen and Cognee.
LangCache public preview
LangCache is Redis’ fully managed semantic caching service that stores LLM responses and serves them back for semantically similar prompts from chatbots and agents, saving round-trip latency and drastically cutting token usage. A brief sketch of the underlying caching pattern follows below.
The performance and cost improvements are substantial:
- Up to 70% reduction in LLM API costs, especially in high-traffic applications
- 15x faster response times for cache hits compared to live LLM inference
- Improved end-user experience with lower latency and more consistent outputs
LangCache is in public preview today.
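For illustration, here is a minimal sketch of that pattern using the open source redisvl library's SemanticCache rather than LangCache itself. The Redis URL, distance threshold, prompts, and call_llm helper are illustrative assumptions, not LangCache defaults:

```python
# A minimal sketch of semantic caching with the open source redisvl
# library (pip install redisvl), not the managed LangCache API itself.
from redisvl.extensions.llmcache import SemanticCache

cache = SemanticCache(
    name="llmcache",
    redis_url="redis://localhost:6379",  # assumes a local Redis with search enabled
    distance_threshold=0.1,              # illustrative similarity cutoff
)
# Note: the default vectorizer downloads an embedding model on first use.

prompt = "What is the capital of France?"
if hits := cache.check(prompt=prompt):
    answer = hits[0]["response"]          # semantically similar hit: no LLM call
else:
    answer = call_llm(prompt)             # hypothetical LLM call of your own
    cache.store(prompt=prompt, response=answer)
```

On a cache hit the LLM round trip is skipped entirely, which is where the latency and token savings above come from.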
New agent integrations and agent memory
It’s now easier to use Redis with existing AI frameworks and tools. Redis’ ecosystem integrations let developers store their data the way they want, without writing custom code. New integrations with AutoGen and Cognee, plus enhancements to the LangGraph integration, expand how developers can use Redis’ scalable, persistent memory layer for agents and chatbots (a LangGraph sketch follows the list below).
Build with:
- AutoGen as your agent framework, with Redis as its fast-data memory layer and existing templates to build from
- Cognee to simplify memory management, with built-in summarization, planning, and reasoning on a Redis backbone
- LangGraph with new enhancements that improve persistent memory and make your AI agents more reliable
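As a sketch of what the LangGraph integration enables, the snippet below wires a trivial graph to a Redis-backed checkpointer so agent state survives restarts. It assumes the langgraph and langgraph-checkpoint-redis packages; the node, state shape, and thread ID are illustrative:

```python
# A minimal sketch of persistent agent memory with LangGraph + Redis,
# assuming pip install langgraph langgraph-checkpoint-redis.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.redis import RedisSaver

class State(TypedDict):
    count: int

def step(state: State) -> State:
    # Trivial node: increments a counter that Redis persists per thread.
    return {"count": state["count"] + 1}

builder = StateGraph(State)
builder.add_node("step", step)
builder.add_edge(START, "step")
builder.add_edge("step", END)

with RedisSaver.from_conn_string("redis://localhost:6379") as saver:
    saver.setup()  # create the Redis structures the checkpointer uses
    graph = builder.compile(checkpointer=saver)
    cfg = {"configurable": {"thread_id": "user-42"}}  # illustrative thread ID
    graph.invoke({"count": 0}, cfg)  # state is checkpointed to Redis
```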
Other Redis for AI improvements
With the rapid emergence of AI agents, Redis is making sure its users can build high-quality agents with Redis for AI. Redis integrations with popular AI agent frameworks let users build agents faster and more reliably. Key agent-focused capabilities include:
- Hybrid search enhancements. Redis now includes Reciprocal Rank Fusion (RRF), a method that unifies text and vector rankings into a single, more relevant result set (see the first sketch after this list).
- Quantization. Redis now supports int8 quantized embeddings, compressing float vectors to 8-bit integers for roughly 75% memory savings and 30% faster search in large-scale AI applications (see the second sketch after this list).
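To make Reciprocal Rank Fusion concrete, here is a minimal, self-contained sketch of the standard RRF formula, score(d) = Σ 1/(k + rank(d)) summed over the input rankings, with the common default k = 60. It illustrates the general technique, not Redis’ internal implementation, and the document IDs are made up:

```python
# Reciprocal Rank Fusion: combine a text (e.g. BM25) ranking and a
# vector-similarity ranking into one fused ranking.
def rrf(rankings, k=60):
    scores = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

text_hits   = ["doc3", "doc1", "doc7"]   # made-up full-text ranking
vector_hits = ["doc1", "doc9", "doc3"]   # made-up vector KNN ranking
print(rrf([text_hits, vector_hits]))     # doc1 and doc3 rise to the top
```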
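And to show where the 75% figure comes from, here is a sketch of simple per-vector int8 scalar quantization. It illustrates the arithmetic only; Redis performs the quantization internally when int8 embeddings are used:

```python
import numpy as np

def quantize_int8(vec: np.ndarray) -> tuple[np.ndarray, float]:
    # Per-vector scale maps the largest magnitude onto the int8 range.
    scale = float(np.abs(vec).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero vector: avoid division by zero
    return np.round(vec / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

vec = np.random.rand(768).astype(np.float32)  # e.g. a 768-dim embedding
q, scale = quantize_int8(vec)
print(vec.nbytes, "->", q.nbytes, "bytes")    # 3072 -> 768: 75% smaller
```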
Redis Cloud and Redis Open Source updates
Redis also announced multiple updates to Redis Cloud and Redis Open Source, including:
- Redis 8.2 GA. A leap in performance, with commands up to 35% faster than Redis 8.0 and a 37% smaller memory footprint, plus improvements to the Redis Query Engine. Redis 8.2 ships 18 data structures, including vector sets, and 480+ commands, including hash field expiration (a brief example follows this list).
- Redis Data Integration (RDI) public preview on Cloud. Keep Redis caches fresh and in sync with source databases using easy-to-set-up, zero-code data pipelines, turning legacy data into real-time data in minutes.
- Redis Insight available on Cloud. Act on Redis data straight from the browser: visualize data and cut debugging time from hours to minutes, without opening a terminal or context switching, and stay on top of Redis performance.
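As a quick illustration of hash field expiration, one of the commands noted above, the snippet below issues the raw HEXPIRE command through redis-py's generic command API. The key and field names are made up, and a Redis server with hash-field TTLs (7.4 or later) is assumed:

```python
import redis

r = redis.Redis()  # assumes a local Redis 7.4+ / 8.x server

# Hash field expiration: expire one field of a hash, not the whole key.
r.hset("session:42", mapping={"user": "ada", "csrf_token": "abc123"})
# Raw command form, to stay client-version agnostic:
# HEXPIRE key seconds FIELDS numfields field [field ...]
r.execute_command("HEXPIRE", "session:42", 60, "FIELDS", 1, "csrf_token")
print(r.hgetall("session:42"))  # csrf_token disappears after 60s; user stays
```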
About Redis
Redis is the world’s fastest data platform. From its open source origins in 2011 to becoming the #1 cited brand for caching solutions, Redis has helped more than 10,000 customers build, scale, and deploy the apps our world runs on. With multi-cloud and on-prem databases for caching, vector search, and more, Redis helps digital businesses set a new standard for app speed.
With offices in San Francisco, Austin, London, and Tel Aviv, Redis is internationally recognized as the leader in building fast apps fast. Learn more at redis.io.
Media Contact
LaunchSquad
Redis@launchsquad.com
