AIOSQLite Backend¶
Overview¶
AIOSQLite is an asynchronous wrapper for SQLite that runs operations in a thread pool executor, providing native async/await support for Python’s built-in SQLite database. This makes it ideal for async web applications, testing, and lightweight async data pipelines that need embedded database capabilities.
Key Features:
Native Async Support: True async/await interface via aiosqlite
Zero Configuration: Embedded database with no server setup
Thread Pool Executor: Runs SQLite operations in background threads
Same SQLite Features: Full access to all SQLite capabilities
File-Based or In-Memory: Flexible storage options
ACID Transactions: Reliable transaction support
WAL Mode: Better concurrency with Write-Ahead Logging
Ideal Use Cases:
Async web applications (FastAPI, Litestar, Starlette)
Async testing and development environments
Async data pipelines with moderate write frequency
Embedded async applications
Prototyping async AI agent applications
Warning
SQLite has single-writer limitations. While aiosqlite provides async access, SQLite itself only supports one write transaction at a time. For production AI agents with high-concurrency writes, consider PostgreSQL (asyncpg) or MySQL (asyncmy). AIOSQLite is best suited for async development, testing, and moderate-concurrency scenarios.
Installation¶
Install SQLSpec with AIOSQLite support:
pip install sqlspec[aiosqlite] google-genai
# or
uv pip install sqlspec[aiosqlite] google-genai
Note
AIOSQLite is included with SQLSpec’s async extras. No additional database server needed!
Quick Start¶
Basic Async File-Based Database¶
import asyncio
from sqlspec.adapters.aiosqlite import AiosqliteConfig
from sqlspec.adapters.aiosqlite.adk import AiosqliteADKStore
from sqlspec.extensions.adk import SQLSpecSessionService
async def main():
# Create async file-based database
config = AiosqliteConfig(pool_config={"database": "./agent_sessions.db"})
store = AiosqliteADKStore(config)
await store.create_tables()
service = SQLSpecSessionService(store)
# Create session with async/await
session = await service.create_session(
app_name="async_chatbot",
user_id="user_123",
state={"mode": "conversational"}
)
print(f"Created session: {session['id']}")
asyncio.run(main())
Async In-Memory Database (Testing)¶
import asyncio
from sqlspec.adapters.aiosqlite import AiosqliteConfig
from sqlspec.adapters.aiosqlite.adk import AiosqliteADKStore
async def test_setup():
# Create async in-memory database
config = AiosqliteConfig(pool_config={"database": ":memory:"})
store = AiosqliteADKStore(config)
await store.create_tables()
# Perfect for async tests!
return store
asyncio.run(test_setup())
Tip
In-memory databases are excellent for async unit tests and ephemeral workloads.
Use shared memory mode (file::memory:?cache=shared) to share across connections.
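The shared-cache URI can be exercised with the standard library's sqlite3 module (shown here instead of aiosqlite so the sketch has no extra dependencies; aiosqlite accepts the same URI form):

```python
import sqlite3

# Two connections to the same shared in-memory database (URI mode).
# The database lives as long as at least one connection stays open.
uri = "file::memory:?cache=shared"
conn_a = sqlite3.connect(uri, uri=True)
conn_b = sqlite3.connect(uri, uri=True)

conn_a.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn_a.execute("INSERT INTO kv VALUES ('greeting', 'hello')")
conn_a.commit()

# The second connection sees the first connection's committed writes.
row = conn_b.execute("SELECT v FROM kv WHERE k = 'greeting'").fetchone()
print(row[0])  # hello

conn_b.close()
conn_a.close()
```

Without `cache=shared`, each `:memory:` connection would get its own private database and the second `SELECT` would fail.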
Configuration¶
Basic Configuration¶
from sqlspec.adapters.aiosqlite import AiosqliteConfig
config = AiosqliteConfig(
pool_config={
"database": "/path/to/database.db", # or ":memory:"
"timeout": 5.0, # Connection timeout
"isolation_level": "DEFERRED", # Transaction isolation
"check_same_thread": False, # Allow multi-thread (safe with aiosqlite)
"uri": True, # Enable URI mode for advanced features
}
)
Note
AIOSQLite automatically converts :memory: to file::memory:?cache=shared
for better connection sharing in async environments.
Connection Pooling¶
config = AiosqliteConfig(
pool_config={
"database": "./sessions.db",
"pool_size": 5, # Connection pool size
"connect_timeout": 30.0, # Pool acquire timeout
"idle_timeout": 3600.0, # Idle connection timeout
"operation_timeout": 10.0, # Per-operation timeout
}
)
Custom Table Names¶
store = AiosqliteADKStore(
config,
session_table="chatbot_sessions",
events_table="chatbot_events"
)
Schema¶
The AIOSQLite ADK store uses the same schema as the sync SQLite adapter, optimized for SQLite’s capabilities.
Sessions Table¶
CREATE TABLE IF NOT EXISTS adk_sessions (
id TEXT PRIMARY KEY,
app_name TEXT NOT NULL,
user_id TEXT NOT NULL,
state TEXT NOT NULL DEFAULT '{}', -- JSON as TEXT
create_time REAL NOT NULL, -- Julian Day number
update_time REAL NOT NULL -- Julian Day number
);
CREATE INDEX idx_adk_sessions_app_user
ON adk_sessions(app_name, user_id);
CREATE INDEX idx_adk_sessions_update_time
ON adk_sessions(update_time DESC);
Events Table¶
CREATE TABLE IF NOT EXISTS adk_events (
id TEXT PRIMARY KEY,
session_id TEXT NOT NULL,
app_name TEXT NOT NULL,
user_id TEXT NOT NULL,
invocation_id TEXT NOT NULL,
author TEXT NOT NULL,
actions BLOB NOT NULL, -- Pickled actions
long_running_tool_ids_json TEXT,
branch TEXT,
timestamp REAL NOT NULL, -- Julian Day number
content TEXT, -- JSON as TEXT
grounding_metadata TEXT, -- JSON as TEXT
custom_metadata TEXT, -- JSON as TEXT
partial INTEGER, -- Boolean as 0/1/NULL
turn_complete INTEGER, -- Boolean as 0/1/NULL
interrupted INTEGER, -- Boolean as 0/1/NULL
error_code TEXT,
error_message TEXT,
FOREIGN KEY (session_id) REFERENCES adk_sessions(id) ON DELETE CASCADE
);
CREATE INDEX idx_adk_events_session
ON adk_events(session_id, timestamp ASC);
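The ON DELETE CASCADE behavior above only takes effect when foreign keys are enabled on the connection. A minimal stdlib sqlite3 sketch of the relationship (table columns simplified from the schema above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Foreign keys are off by default in SQLite; enable them per connection.
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE adk_sessions (id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE adk_events (
        id TEXT PRIMARY KEY,
        session_id TEXT NOT NULL,
        FOREIGN KEY (session_id) REFERENCES adk_sessions(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO adk_sessions VALUES ('s1')")
conn.execute("INSERT INTO adk_events VALUES ('e1', 's1')")

# Deleting the session cascades to its events.
conn.execute("DELETE FROM adk_sessions WHERE id = 's1'")
remaining = conn.execute("SELECT COUNT(*) FROM adk_events").fetchone()[0]
print(remaining)  # 0
```

If the PRAGMA line is removed, the delete leaves the orphaned event row behind, which is exactly the failure mode described in the Troubleshooting section below.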
Note
SQLite Data Type Mapping:
TEXT: Strings, JSON (via to_json/from_json)
REAL: Julian Day timestamps (efficient date arithmetic)
INTEGER: Booleans (0=False, 1=True, NULL=None)
BLOB: Binary data (pickled actions from Google ADK)
Foreign key constraints are enabled per connection via PRAGMA foreign_keys = ON.
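Julian Day timestamps convert back to Python datetimes with simple arithmetic. A stdlib sqlite3 sketch (the constant 2440587.5 is the Julian Day of the Unix epoch, 1970-01-01T00:00:00Z):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
UNIX_EPOCH_JD = 2440587.5  # Julian Day of the Unix epoch

def julian_to_datetime(jd: float) -> datetime:
    """Convert a SQLite Julian Day REAL to an aware UTC datetime."""
    return UNIX_EPOCH + timedelta(days=jd - UNIX_EPOCH_JD)

conn = sqlite3.connect(":memory:")
jd = conn.execute("SELECT julianday('2024-01-15 12:00:00')").fetchone()[0]
dt = julian_to_datetime(jd)
print(jd, dt.isoformat())
```

This is why REAL timestamps are convenient: date arithmetic in SQL is plain subtraction of floats, and round-tripping to Python needs no parsing.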
Usage Patterns¶
Async Context Managers¶
from sqlspec.adapters.aiosqlite import AiosqliteConfig
from sqlspec.adapters.aiosqlite.adk import AiosqliteADKStore
async def use_store():
config = AiosqliteConfig(pool_config={"database": ":memory:"})
# Async context manager for connections
async with config.provide_connection() as conn:
await conn.execute("PRAGMA journal_mode=WAL")
await conn.commit()
# Or use the store directly
store = AiosqliteADKStore(config)
await store.create_tables()
Native Async/Await Operations¶
from datetime import datetime, timezone
# All store operations are native async
session = await store.create_session(
session_id="session_123",
app_name="assistant",
user_id="user_456",
state={"context": "active"}
)
# Retrieve with await
retrieved = await store.get_session("session_123")
# Update session state
await store.update_session_state(
"session_123",
{"context": "active", "last_query": "What's the weather?"}
)
# List sessions
sessions = await store.list_sessions("assistant", "user_456")
# Get events with filtering
recent_events = await store.get_events(
session_id="session_123",
after_timestamp=datetime.now(timezone.utc),
limit=50
)
# Delete session (cascade deletes events)
await store.delete_session("session_123")
JSON Serialization¶
# JSON is stored as TEXT using SQLSpec serializers
state = {
"preferences": {"theme": "dark", "language": "en"},
"conversation_mode": "chat",
"tools_enabled": ["web_search", "calculator"]
}
session = await store.create_session(
session_id="sess_1",
app_name="app",
user_id="user",
state=state
)
# Retrieved state is automatically deserialized
retrieved = await store.get_session("sess_1")
print(retrieved["state"]["preferences"]["theme"]) # "dark"
Note
SQLSpec uses the best available JSON serializer: msgspec > orjson > stdlib json.
All JSON serialization is handled transparently via to_json/from_json.
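The preference chain can be sketched as follows; this illustrates the documented order (msgspec > orjson > stdlib json) and is not SQLSpec's actual implementation:

```python
import json
from typing import Any, Callable

def pick_json_encoder() -> Callable[[Any], str]:
    """Return the best available JSON encoder, falling back to stdlib json."""
    try:
        import msgspec.json
        return lambda obj: msgspec.json.encode(obj).decode("utf-8")
    except ImportError:
        pass
    try:
        import orjson
        return lambda obj: orjson.dumps(obj).decode("utf-8")
    except ImportError:
        pass
    return json.dumps

to_json = pick_json_encoder()
print(to_json({"theme": "dark"}))
```

Whichever encoder wins, the output is a JSON string suitable for a TEXT column, so callers never see which library was selected.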
Performance Considerations¶
Thread Pool Executor Model¶
AIOSQLite runs SQLite operations in a thread pool to provide async access:
Implications:
Thread switching overhead: Each operation switches to a thread pool
Batch operations recommended: Use executemany() for bulk inserts
Not true parallelism: SQLite's single-writer model still applies
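The executor model can be sketched with stdlib asyncio and sqlite3 (a simplified version of what aiosqlite does internally):

```python
import asyncio
import sqlite3

def blocking_query(db_path: str) -> int:
    """A plain blocking sqlite3 call, as aiosqlite would run on its worker thread."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT 1 + 1").fetchone()[0]
    finally:
        conn.close()

async def main() -> int:
    loop = asyncio.get_running_loop()
    # Hand the blocking call to the default ThreadPoolExecutor; the event
    # loop stays free to service other coroutines while SQLite works.
    return await loop.run_in_executor(None, blocking_query, ":memory:")

result = asyncio.run(main())
print(result)  # 2
```

Each `await` here pays a thread hand-off, which is the overhead the batching advice above is trying to amortize.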
Best Practices:
# BAD: Many individual async operations
for event in events:
await store.append_event(event)
# BETTER: Batch when possible (consider implementing executemany)
# Or accept the overhead for simplicity in low-frequency scenarios
WAL Mode for Better Concurrency¶
Enable Write-Ahead Logging (WAL) mode for better concurrent read/write performance:
async with config.provide_connection() as conn:
await conn.execute("PRAGMA journal_mode=WAL")
await conn.commit()
Benefits:
Multiple readers can access database while writer is active
Better performance for read-heavy workloads
Reduced lock contention
Trade-offs:
Slightly more disk I/O
Requires file system that supports WAL (supported on most systems)
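Because `PRAGMA journal_mode` reports the mode actually in effect (WAL needs a file-backed database; `:memory:` databases keep a memory journal), it is worth checking the returned value. A stdlib sqlite3 sketch:

```python
import os
import sqlite3
import tempfile

# WAL requires a real file, so use a temporary database.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

conn = sqlite3.connect(path)
# The PRAGMA returns the journal mode now in effect; verify it took.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

conn.close()
os.remove(path)
```

If the filesystem cannot support WAL the PRAGMA silently keeps the previous mode, so asserting on the result catches misconfiguration early.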
Performance Tuning PRAGMAs¶
async with config.provide_connection() as conn:
# Enable WAL mode (recommended)
await conn.execute("PRAGMA journal_mode=WAL")
# Faster synchronization (less durable, but faster)
await conn.execute("PRAGMA synchronous=NORMAL")
# Increase cache size (64MB)
await conn.execute("PRAGMA cache_size=-64000")
# Memory-mapped I/O (256MB)
await conn.execute("PRAGMA mmap_size=268435456")
await conn.commit()
Warning
PRAGMA synchronous=NORMAL trades durability for performance.
Only use in development or with acceptable data loss risk.
When to Use AIOSQLite¶
Ideal For:
✅ Async web applications (FastAPI, Litestar, Starlette)
✅ Async testing and development environments
✅ Embedded async applications (desktop, CLI tools)
✅ Prototyping async AI agent applications
✅ Moderate-concurrency async workloads
✅ Async data pipelines with moderate write frequency
Consider Alternatives When:
❌ High-concurrency production AI agent (many simultaneous writers)
❌ Need true async database parallelism (use asyncpg/asyncmy)
❌ Require server-based deployment with connection pooling
❌ Need advanced JSON indexing (use PostgreSQL JSONB)
❌ High-frequency write workloads across many connections
Comparison: AIOSQLite vs Other Backends¶
| Feature | AIOSQLite | SQLite (sync) | AsyncPG (PostgreSQL) |
|---|---|---|---|
| Setup Complexity | Zero config, embedded | Zero config, embedded | Requires PostgreSQL server |
| Async Support | Native async/await | Sync with async wrapper | Native async/await |
| Concurrent Writes | Single writer (SQLite limit) | Single writer (SQLite limit) | Excellent multi-writer |
| Thread Pool | Yes (aiosqlite executor) | No (direct calls) | No (native async I/O) |
| Performance | Good for moderate async | Better for sync-only apps | Best for high concurrency |
| Deployment | Single file | Single file | Client-server |
| Best Use Case | Async dev/testing | Sync applications | Production async apps |
When to Use AIOSQLite vs Sync SQLite¶
Use AIOSQLite When:
Your application is already async (FastAPI, Litestar, etc.)
You want to avoid blocking the event loop
Integration with other async libraries
Async testing frameworks (pytest-asyncio)
Use Sync SQLite When:
Your application is entirely synchronous
Lower overhead is critical (no thread pool switching)
Simpler deployment without async complexity
Legacy codebase without async support
Example: Full Application¶
See the complete runnable example:
This example demonstrates:
Async/await throughout the application
File-based and in-memory configurations
Session and event management
Proper async cleanup
WAL mode configuration
Troubleshooting¶
Database Locked Errors¶
Error:
sqlite3.OperationalError: database is locked
Causes:
Multiple writers attempting simultaneous writes
Long-running transactions holding locks
Lack of WAL mode in concurrent scenarios
Solutions:
# 1. Enable WAL mode (most important!)
async with config.provide_connection() as conn:
await conn.execute("PRAGMA journal_mode=WAL")
await conn.commit()
# 2. Increase timeout
config = AiosqliteConfig(
pool_config={"database": "./db.sqlite", "timeout": 30.0}
)
# 3. Use transaction batching (reduce write frequency)
Async Context Manager Issues¶
Error:
RuntimeError: Event loop is closed
Solution:
Ensure you’re using asyncio.run() or managing the event loop properly:
import asyncio
async def main():
config = AiosqliteConfig(pool_config={"database": ":memory:"})
store = AiosqliteADKStore(config)
await store.create_tables()
# Correct
asyncio.run(main())
# Incorrect (creates/closes loop improperly)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close() # May cause issues
Thread Safety Concerns¶
Issue: SQLite check_same_thread restriction
Solution: AIOSQLite handles this automatically via thread pool executor:
# check_same_thread=False is safe with aiosqlite
config = AiosqliteConfig(
pool_config={
"database": "./db.sqlite",
"check_same_thread": False # Safe with aiosqlite
}
)
Foreign Key Not Enforced¶
Issue: Foreign key constraints not working
Solution: The store automatically enables foreign keys per connection:
# Handled automatically by AiosqliteADKStore
await conn.execute("PRAGMA foreign_keys = ON")
# Verify foreign keys are enabled
async with config.provide_connection() as conn:
cursor = await conn.execute("PRAGMA foreign_keys")
result = await cursor.fetchone()
print(f"Foreign keys enabled: {result[0]}") # Should be 1
Best Practices¶
Enable WAL Mode Early¶
async def initialize_database(config: AiosqliteConfig):
"""Initialize database with optimal settings."""
async with config.provide_connection() as conn:
await conn.execute("PRAGMA journal_mode=WAL")
await conn.execute("PRAGMA synchronous=NORMAL")
await conn.execute("PRAGMA cache_size=-64000")
await conn.commit()
store = AiosqliteADKStore(config)
await store.create_tables()
return store
Use Connection Pooling¶
# Good: Reuse connection pool
config = AiosqliteConfig(pool_config={"database": "./db.sqlite", "pool_size": 5})
store = AiosqliteADKStore(config)
# All operations use the pool
await store.create_session(...)
await store.get_session(...)
Avoid Long-Running Transactions¶
# BAD: Long transaction blocks other writers
async with config.provide_connection() as conn:
await conn.execute("BEGIN")
# ... many operations ...
await asyncio.sleep(10) # Holding lock!
await conn.commit()
# GOOD: Short, focused transactions
async with config.provide_connection() as conn:
await conn.execute("BEGIN")
await conn.execute(...)
await conn.execute(...)
await conn.commit() # Quick commit
Graceful Cleanup¶
async def application_lifecycle():
config = AiosqliteConfig(pool_config={"database": "./db.sqlite"})
store = AiosqliteADKStore(config)
await store.create_tables()
try:
# Application logic
yield store
finally:
# Clean up connection pool
await config.close_pool()
Migration from Sync SQLite¶
Migrating from sync SQLite to AIOSQLite is straightforward:
# Before: Sync SQLite
from sqlspec.adapters.sqlite import SqliteConfig
from sqlspec.adapters.sqlite.adk import SqliteADKStore
def sync_app():
config = SqliteConfig(database="./db.sqlite")
store = SqliteADKStore(config)
# ... sync operations ...
# After: Async AIOSQLite
from sqlspec.adapters.aiosqlite import AiosqliteConfig
from sqlspec.adapters.aiosqlite.adk import AiosqliteADKStore
async def async_app():
config = AiosqliteConfig(pool_config={"database": "./db.sqlite"})
store = AiosqliteADKStore(config)
# ... async operations with await ...
Key Changes:
Import from aiosqlite instead of sqlite
Add async/await keywords
Use pool_config parameter (not direct kwargs)
Use asyncio.run() to execute
API Reference¶
- class sqlspec.adapters.aiosqlite.adk.AiosqliteADKStore[source]
Bases: BaseAsyncADKStore[AiosqliteConfig]
Aiosqlite ADK store using asynchronous SQLite driver.
Implements session and event storage for Google Agent Development Kit using SQLite via the asynchronous aiosqlite driver.
Provides:
Session state management with JSON storage (as TEXT)
Event history tracking with BLOB-serialized actions
Julian Day timestamps (REAL) for efficient date operations
Foreign key constraints with cascade delete
Efficient upserts using INSERT OR REPLACE
- Parameters:
config (AiosqliteConfig) – AiosqliteConfig with extension_config["adk"] settings.
Example
from sqlspec.adapters.aiosqlite import AiosqliteConfig
from sqlspec.adapters.aiosqlite.adk import AiosqliteADKStore

config = AiosqliteConfig(
    pool_config={"database": ":memory:"},
    extension_config={
        "adk": {
            "session_table": "my_sessions",
            "events_table": "my_events",
        }
    },
)
store = AiosqliteADKStore(config)
await store.create_tables()
Notes
JSON stored as TEXT with SQLSpec serializers (msgspec/orjson/stdlib)
BOOLEAN as INTEGER (0/1, with None for NULL)
Timestamps as REAL (Julian Day: julianday('now'))
BLOB for pre-serialized actions from Google ADK
PRAGMA foreign_keys = ON (enable per connection)
Configuration is read from config.extension_config["adk"]
- __init__(config)[source]
Initialize Aiosqlite ADK store.
- Parameters:
config (AiosqliteConfig) – AiosqliteConfig instance.
Notes
Configuration is read from config.extension_config["adk"]:
session_table: Sessions table name (default: "adk_sessions")
events_table: Events table name (default: "adk_events")
- async create_tables()[source]
Create both sessions and events tables if they don’t exist.
- Return type:
- async create_session(session_id, app_name, user_id, state, owner_id=None)[source]
Create a new session.
- Parameters:
- Return type:
- Returns:
Created session record.
Notes
Uses Julian Day for create_time and update_time. State is JSON-serialized before insertion.
- async get_session(session_id)[source]
Get session by ID.
- Parameters:
- Return type:
- Returns:
Session record or None if not found.
Notes
SQLite returns Julian Day (REAL) for timestamps. JSON is parsed from TEXT storage.
- async update_session_state(session_id, state)[source]
Update session state.
- Parameters:
- Return type:
Notes
This replaces the entire state dictionary. Updates update_time to current Julian Day.
- async list_sessions(app_name, user_id=None)[source]
List sessions for an app, optionally filtered by user.
- Parameters:
- Return type:
- Returns:
List of session records ordered by update_time DESC.
Notes
Uses composite index on (app_name, user_id) when user_id is provided.
- async delete_session(session_id)[source]
Delete session and all associated events (cascade).
Notes
Foreign key constraint ensures events are cascade-deleted.
- async append_event(event_record)[source]
Append an event to a session.
- Parameters:
event_record (EventRecord) – Event record to store.
- Return type:
Notes
Uses Julian Day for timestamp. JSON fields are serialized to TEXT. Boolean fields converted to INTEGER (0/1/NULL).
- async get_events(session_id, after_timestamp=None, limit=None)[source]
Get events for a session.
- Parameters:
- Return type:
- Returns:
List of event records ordered by timestamp ASC.
Notes
Uses index on (session_id, timestamp ASC). Parses JSON fields and converts BLOB actions to bytes. Converts INTEGER booleans back to bool/None.
- property config: ConfigT
Return the database configuration.
- property events_table: str
Return the events table name.
- property owner_id_column_ddl: str | None
Return the full owner ID column DDL (or None if not configured).
- property owner_id_column_name: str | None
Return the owner ID column name only (or None if not configured).
- property session_table: str
Return the sessions table name.
See Also¶
Quick Start - Quick start guide
Database Adapters - Adapter comparison
Schema Reference - Database schema details
SQLite Backend - Sync SQLite backend (comparison)
DuckDB Backend - DuckDB backend (embedded OLAP alternative)
AIOSQLite Documentation - Official aiosqlite documentation
SQLite Documentation - SQLite reference
SQLite WAL Mode - Write-Ahead Logging explained