Simple KV storage on top of indexedDB
If your client-side application ever needs to persist a larger amount of data, it's no longer suitable to put it inside a localStorage entry. The first thing that comes to mind is to use indexedDB. But then you have to manage transactions, versioning, etc.
Sometimes all you need is a straightforward key-value store that hides those intricacies inside, like:
```ts
const kv = await openKV("kv");

await kv.set(key, value);
const data = await kv.get(key);
```
What is stopping you from building one? Let’s get to work!
Opening up
Working with the indexedDB API is not especially pleasant. It's taken straight from the primal age of JavaScript. If you're looking for a more civilized API to digest, check out idb - it enhances the indexedDB API with promises and shortcuts for common operations.
But in this post, we're not afraid of the tears and pain of the past.
Opening up (the database)
First, we need to open a new connection request, then attach handlers for the `success` and `error` events. Everything is enclosed within a compact Promise:
```ts
const STORE_NAME = "store";

const openKVDatabase = (dbName: string) =>
  new Promise<IDBDatabase>((resolve, reject) => {
    const request = indexedDB.open(dbName);
    request.onsuccess = () => {
      resolve(request.result);
    };
    request.onerror = () => {
      reject("indexedDB request error");
    };
    request.onupgradeneeded = () => {
      request.result.createObjectStore(STORE_NAME, { keyPath: "key" });
    };
  });
```
The `upgradeneeded` event is fired once the database is created (or when its version gets updated). Inside this handler, we can create our one and only store - the KV store. I've put `STORE_NAME` in a constant, as we'll need to use it in multiple places later on.
First blood methods
Let's scaffold a basic shape for the `get`, `set`, and `delete` methods. They will correspond to the indexedDB objectStore operations `get`, `put`, and `delete`, respectively:
```ts
export async function openKV<T = unknown>(dbName: string) {
  const db = await openKVDatabase(dbName);

  const openStore = () => {
    return db.transaction(STORE_NAME, "readwrite").objectStore(STORE_NAME);
  };

  return {
    async get(key: string) {},
    async set(key: string, value: T) {},
    async delete(key: string) {},
  };
}
```
The `openStore` helper function opens up a new transaction and returns the handle for our KV store.
Requests as promised
One more thing needs to be done before implementing the methods. objectStore methods return an `IDBRequest` object. This object achieves the same goal as a promise (it's like a goofy version of one). Let's create a utility that maps requests into promises, so we can `await` them:
```ts
function idbRequestToPromise<T>(request: IDBRequest<T>) {
  return new Promise<T>((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```
Methods
```ts
async get(key: string): Promise<T | undefined> {
  const pair: Pair | undefined = await idbRequestToPromise(
    openStore().get(key)
  );
  return pair?.value as T | undefined;
},
async set(key: string, value: T) {
  const pair: Pair = { key, value };
  await idbRequestToPromise(openStore().put(pair));
},
delete(key: string) {
  return idbRequestToPromise(openStore().delete(key));
},
```
The `Pair` type used here is just:

```ts
type Pair<T = unknown> = { key: string; value: T };
```
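Before we move on, a quick usage sketch of what we have so far (the database name and values here are just illustrative, not from the code above):

```ts
// Hypothetical usage of the basic store built above
const kv = await openKV<number>("kv");

await kv.set("answer", 42);
console.log(await kv.get("answer")); // 42

await kv.delete("answer");
console.log(await kv.get("answer")); // undefined
```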
You got to pump it up
As you probably noticed, opening a new transaction every time we perform a single key-value operation is suboptimal. Consider this snippet:
```ts
for (const item of arr) {
  kv.set(item.id, item);
}
```
To handle an array of 1000 items, we need to open 1000 transactions. If the operations are executed in a single task (triggered synchronously), as in the example above, grouping them into a single transaction (aka batching) could improve efficiency. Let's verify whether this assumption holds true.
Batching
To implement batching, we need to update the `openStore` function a little bit:
```ts
const db = await openKVDatabase(dbName);

// Create 'store' variable to share it between calls
let store: IDBObjectStore | null = null;

const openStore = () => {
  if (!store) {
    store = db.transaction(STORE_NAME, "readwrite").objectStore(STORE_NAME);
    queueMicrotask(() => {
      // Finish the transaction after the current task ends
      store?.transaction?.commit();
      store = null;
    });
  }
  return store;
};
```
`queueMicrotask` allows running code after the current task has been executed (microtasks are run between regular tasks). Learn more here.
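If the ordering is not obvious, here is a tiny standalone illustration (mine, not part of the store code) of how microtasks slot in between tasks:

```ts
console.log("task: start");

// Scheduled as a microtask - runs right after the current task finishes
queueMicrotask(() => console.log("microtask"));

// Scheduled as a new task - runs after all queued microtasks
setTimeout(() => console.log("next task"), 0);

console.log("task: end");
// Output: "task: start", "task: end", "microtask", "next task"
```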
Testing
I used tinybench to prepare a basic test case, like so:
```ts
Promise.all(arr.map((v) => kv.set(v, v)));
```
Where `arr` is a 1000-element array of strings.
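The benchmark wiring is not shown above, but a minimal sketch with tinybench could look roughly like this (the database name, options, and array contents are my assumptions):

```ts
import { Bench } from "tinybench";

const kv = await openKV<string>("bench-kv");
const arr = Array.from({ length: 1000 }, (_, i) => `item-${i}`);

const bench = new Bench({ time: 500 });

bench.add("1000 sets", async () => {
  await Promise.all(arr.map((v) => kv.set(v, v)));
});

await bench.run();
console.table(bench.table()); // prints ops/sec per task
```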
Results
Unsurprisingly, there is a significant improvement over the 1000-transaction version:
| 1000 transactions | batching     |
| ----------------- | ------------ |
| 7 (ops/sec)       | 32 (ops/sec) |
Transactions
Okay, so when I run queries synchronously, they will be put into a single transaction. But what about the original reason for inventing database transactions? It was to group queries together to ensure consistency. Check out this code:
```ts
async function inc() {
  await kv.set("x", (await kv.get("x")) + 1);
}
```
It would only make sense if both the `set` and `get` operations formed a single transaction.
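To make the consistency problem concrete, here is a hypothetical lost-update scenario: when `get` and `set` land in separate transactions, two concurrent increments can read the same value and overwrite each other.

```ts
await kv.set("x", 0);

// Both calls read "x" before either write lands
await Promise.all([inc(), inc()]);

console.log(await kv.get("x")); // can end up as 1 instead of the expected 2
```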
Async / await tracking
Unfortunately, APIs like AsyncLocalStorage - available in server runtimes including Node.js, Deno, and Bun - that would allow us to track async context are not (yet) available in browsers. However, we can hook into async / await by leveraging custom Thenables and microtask scheduling…
If you are interested in learning more about tracking asynchronous contexts, you can check out proposal-async-context - the official ECMAScript proposal that addresses this particular issue.
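For reference, this is roughly what async-context tracking looks like with AsyncLocalStorage on the server side - a sketch of the API we are missing in browsers, with hypothetical names:

```ts
import { AsyncLocalStorage } from "node:async_hooks";

type Ctx = { id: number };

const contextStorage = new AsyncLocalStorage<Ctx>();

async function doWork() {
  // Any code awaited from here still sees the same context object
  console.log(contextStorage.getStore()?.id); // 1
}

contextStorage.run({ id: 1 }, () => doWork());
```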
…and then
Userland Thenables can be awaited just like Promises. The key difference when it comes to Thenables is that their "_then_" method is always executed when used in async / await code. This allows us to intercept the async execution of the code and inject hooks before and after the continuation of the async / await block. Here's my attempt at doing that:
```ts
export class Thenable<T> {
  constructor(
    private promise: Promise<T>,
    private hooks?: {
      before?: () => void;
      after?: () => void;
    }
  ) {}

  then<U>(onFulfilled: (value: T) => U): Thenable<U> {
    return new Thenable(
      this.promise.then((value) => {
        this.hooks?.before?.();
        const result = onFulfilled(value);
        queueMicrotask(() => this.hooks?.after?.());
        return result;
      }),
      this.hooks
    );
  }
}
```
Notice how the after hook is pushed onto the microtask queue. That's because calling `onFulfilled` will itself push the continuation onto the queue - this way the after hook is called after the continuation microtask.
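A quick sanity check of that ordering (my own example): the before hook fires first, then the awaiting code resumes, and only then does the after hook run.

```ts
async function demo() {
  const value = await new Thenable(Promise.resolve(42), {
    before: () => console.log("1. before hook"),
    after: () => console.log("3. after hook"),
  });
  console.log("2. continuation resumed with", value);
}

demo();
// Logs: "1. before hook", "2. continuation resumed with 42", "3. after hook"
```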
Sharing current transaction
Using the before and after hooks, we can now make the current transaction accessible from within adjacent queries. Here's the type of the transaction object that will be shared:
```ts
type Transaction = {
  store: IDBObjectStore;
  committed?: boolean;
  lastQueried?: boolean;
};
```
The `committed` and `lastQueried` flags will be used to implement auto-committing of the transaction. All queries will now be wrapped in a `query` function to handle the sharing.
```ts
// Shared between query calls (declared inside openKV, next to openStore)
let currentTransaction: Transaction | null = null;

const query = <R>(
  fn: (transaction: Transaction) => Promise<R>
): Thenable<R> => {
  const transaction = (currentTransaction ??= {
    store: db.transaction(STORE_NAME, "readwrite").objectStore(STORE_NAME),
  });

  // Clear current transaction after current task
  queueMicrotask(() => {
    currentTransaction = null;
  });

  const result = fn(transaction);

  return new Thenable(result, {
    before() {
      // Resume transaction before the continuation
      currentTransaction = transaction;
    },
    after() {
      currentTransaction = null;
    },
  });
};
```
And an example of usage:
```ts
set(key: string, value: T) {
  // Wrap handler with query
  return query(async ({ store }) => {
    const pair = { key, value };
    await idbRequestToPromise(store.put(pair));
  });
},
```
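The remaining methods are wrapped the same way; for instance, `get` could look like this (my sketch, mirroring the earlier non-batched version):

```ts
get(key: string) {
  // Joins the shared transaction via query, just like set
  return query(async ({ store }) => {
    const pair: Pair<T> | undefined = await idbRequestToPromise(store.get(key));
    return pair?.value;
  });
},
```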
Auto-committing
After a series of queries, it would be great to handle committing automatically. The `lastQueried` flag will indicate whether any query was executed during the last microtask:
```ts
const query = <R>(
  fn: (transaction: Transaction) => Promise<R>
): Thenable<R> => {
  const transaction: Transaction = (currentTransaction ??= {
    store: db.transaction(STORE_NAME, "readwrite").objectStore(STORE_NAME),
  });

  // Running `query` will reset the flag
  transaction.lastQueried = true;

  queueMicrotask(() => {
    currentTransaction = null;
  });

  const result = fn(transaction);

  return new Thenable(result, {
    before() {
      currentTransaction = transaction;
      transaction.lastQueried = false;
    },
    after() {
      // If there were no new queries during the last microtask
      if (!transaction.lastQueried && !transaction.committed) {
        transaction.store.transaction.commit();
        transaction.committed = true;
      }
      currentTransaction = null;
    },
  });
};
```
Admiring the results
Take a look at this and let it sink in:
```ts
async function inc() {
  await kv.set("x", (await kv.get("x")) + 1);
}
```
The function above now forms a single ACID transaction!
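The same holds for any other multi-step helper built on top of the store; for example, this hypothetical `move` issues all of its queries within one shared transaction:

```ts
// Hypothetical helper: read, copy, and delete within a single transaction
async function move(from: string, to: string) {
  const value = await kv.get(from);
  if (value !== undefined) {
    await kv.set(to, value);
  }
  await kv.delete(from);
}
```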
Summing Up
The indexedDB API is not the easiest one to work with. It's not the fastest horse in the stable either. It's probably a good idea to use one of the popular wrapper libraries like idb or Dexie.js that simplify and streamline the process of working with it. There is also idb-keyval - a super-simple key-value store (but without automatic batching and transactions 🙊). Still, implementing your own wrapper may be great fun and will definitely help you understand better how it all works.