Nana Kwasi Asante

Software Engineer

IoT: The TTLock API

Last year I worked on a project that involved IoT smart locks. Here's what I learned along the way.

What is a smart lock?

A smart lock is a physical lock that can be controlled digitally — typically over WiFi or Bluetooth. Instead of only turning a key, you can lock and unlock it from your phone, set up access codes, grant temporary access to other people, and get notifications when the door is opened or closed.

Under the hood, a smart lock is just a regular lock with a motor and a small computer board. That board connects to the internet (or a local network) and listens for commands. The "smart" part is really just the software layer that sits on top.

What is TTLock?

TTLock is one of the more popular platforms in the smart lock space, especially for developers. They manufacture locks and also provide a cloud backend and an API that lets you control those locks programmatically.

The way it works is roughly this:

  1. A TTLock smart lock connects to a gateway — a small device that bridges the lock (which communicates over Bluetooth) to the internet.
  2. The gateway talks to TTLock's cloud servers.
  3. You interact with the lock through TTLock's API, sending commands like lock, unlock, or add access code, and the cloud relays them to the gateway, which relays them to the lock.

So as a developer, you never talk to the lock directly. You talk to the TTLock API, and the rest is handled for you. That's both the appeal and the source of most of the quirks I ran into.

What I was building

The project was an API that sat between our mobile app and the TTLock platform. Users could do two main things:

  • Remotely lock and unlock the door directly from the app.
  • Manage access codes — create, update, and delete codes that could be entered on the lock's keypad, also from within the app.

So the flow looked something like this: a user taps "unlock" in the mobile app, that hits our API, and our API calls the TTLock API to send the unlock command down to the lock. For access codes it was similar — the app would send a request to create a code, we'd pass that along to TTLock, and TTLock would push it to the lock via the gateway.

On top of that, we also had to keep our own database in sync with what was happening on the TTLock side. That meant pulling and syncing data for:

  • Locks — the lock devices themselves and their current state.
  • Gateways — the gateway devices and whether they were online and reachable.
  • Access codes — keeping our records of active codes in sync with what was actually on the lock.
  • Sensor data — things like whether the door was open or closed, battery levels, and other status updates from the lock.

This wasn't just a nice-to-have. If our database got out of step with the actual state of the lock, things would break — users would see stale data in the app, or worse, think a door was locked when it wasn't.

To keep our database fresh, we ran scheduled sync jobs at set intervals for each type of data. Locks synced every 2 minutes since their status is the most critical. Access codes and cards synced every 3 minutes, and gateways every 5 minutes since they change less often. We also staggered the intervals slightly so that all the syncs didn't hit the TTLock API at exactly the same time.

On top of the scheduled syncs, user actions didn't wait for a sync cycle. When someone unlocked a door or created an access code in the app, that request went straight to the TTLock API in real time. The scheduled syncs filled in the gaps and kept everything else up to date in the background.

The TTLock API does have rate limits though — you can only make so many requests in a given window before they start rejecting you. So we couldn't just fire off requests whenever we wanted. We used a queue system to manage this: sync jobs were pushed into a queue and processed by workers at a controlled rate, with automatic retries if something failed. That way we stayed within TTLock's limits without having to manually calculate how often we could call them.

On the surface it sounds straightforward. And it mostly is — until you start running into the less obvious parts of how TTLock works. That's what the rest of this post is about.

Setting up the TTLock API

Getting your credentials

Before you write any code, you need to get set up on the TTLock Open Platform. It's a bit more involved than you might expect — there's an approval process, and it can take a few days.

  1. Register a developer account on the TTLock Open Platform.
  2. Wait for approval — TTLock manually reviews new developer accounts. This can take a day or two.
  3. Create an application — once your account is approved, log in and create a new app. You'll need to provide a name, logo, description, and select "Web" as the type.
  4. Wait for application approval — your app also needs to be reviewed and approved, which can take a few more days.
  5. Retrieve your credentials — once approved, your clientId and clientSecret will be available in the application's details.

So in total you need four things to make API calls:

  • clientId — generated when your application is approved.
  • clientSecret — generated alongside the clientId. Keep this secure — never put it in client-side code.
  • username — your TTLock account username (the one you use in the mobile app).
  • password — your TTLock account password.

Getting an access token

The TTLock API uses OAuth2. Before you can make any API calls, you need to get an access token by hitting the token endpoint:

POST https://euapi.ttlock.com/oauth2/token

One thing to note: the password isn't sent as plain text. You need to MD5 hash it before sending. That's a quirk of the TTLock API that's easy to miss if you're not paying attention.

Here's how we did it in NestJS:

import { createHash } from 'crypto';
import { firstValueFrom } from 'rxjs';

async getAccessToken(): Promise<TTLockAccessTokenResponse> {
  const hashedPassword = createHash('md5')
    .update(this.password)
    .digest('hex');

  const params = new URLSearchParams({
    clientId: this.clientId,
    clientSecret: this.clientSecret,
    username: this.username,
    password: hashedPassword,
  });

  const response = await firstValueFrom(
    this.httpService.post(`${this.baseUrl}/oauth2/token`, params.toString(), {
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    }),
  );

  return response.data;
}

The response gives you back an access_token, a refresh_token, and an expires_in value telling you how long the token is valid.
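
For reference, the TTLockAccessTokenResponse type used in these snippets can be little more than a description of that payload. A minimal sketch (TTLock returns a few extra fields we didn't rely on, hence the index signature):

interface TTLockAccessTokenResponse {
  access_token: string;
  refresh_token: string;
  expires_in: number; // seconds until the access token expires
  [key: string]: unknown; // additional fields we didn't rely on
}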

Refreshing the token

Once your access token expires, you don't need to re-authenticate from scratch. You can use the refresh_token from the original response to get a new access token from the same endpoint:

async refreshAccessToken(refresh_token: string): Promise<TTLockAccessTokenResponse> {
  const params = new URLSearchParams({
    clientId: this.clientId,
    clientSecret: this.clientSecret,
    grant_type: 'refresh_token',
    refresh_token,
  });

  const response = await firstValueFrom(
    this.httpService.post(`${this.baseUrl}/oauth2/token`, params.toString(), {
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    }),
  );

  return response.data;
}

In our setup, we handled token refresh on a schedule so it was always fresh when the rest of the code needed it. The service that owned the token was the only thing that knew about expiry and refresh — everything else just asked for the current token when making a request.
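
To make that concrete, here's a lazy-refresh sketch of what a getValidToken() helper (used in the next snippet) might look like. Our real service refreshed on a schedule rather than on demand, so treat the expiry check and field names here as illustrative:

private accessToken = '';
private refreshToken = '';
private tokenExpiresAt = 0; // epoch milliseconds

async getValidToken(): Promise<string> {
  // Refresh slightly early so in-flight requests never carry an expired token
  if (this.accessToken && Date.now() < this.tokenExpiresAt - 60_000) {
    return this.accessToken;
  }

  const response = this.refreshToken
    ? await this.refreshAccessToken(this.refreshToken)
    : await this.getAccessToken();

  this.accessToken = response.access_token;
  this.refreshToken = response.refresh_token;
  this.tokenExpiresAt = Date.now() + response.expires_in * 1000;

  return response.access_token;
}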

Making authenticated API calls

Once you have an access token, every request to the TTLock API needs to include it as a query parameter called accessToken. Most endpoints also require a date parameter — a 13-digit timestamp in milliseconds representing when the request was made.

Here's a helper we built to make authenticated requests:

async makeAuthenticatedRequest<T>(
  endpoint: string,
  additionalParams: Record<string, any> = {},
): Promise<T> {
  const token = await this.getValidToken(); // Gets current token or refreshes if expired

  const params = new URLSearchParams({
    accessToken: token,
    date: Date.now().toString(),
    ...additionalParams,
  });

  const response = await firstValueFrom(
    this.httpService.post(`${this.baseUrl}${endpoint}`, params.toString(), {
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    }),
  );

  return response.data;
}

One thing to watch out for: TTLock returns a 200 OK response even when something goes wrong. The actual error is buried in the response body with an errcode field. A successful response has errcode: 0. Anything else is an error, and you need to check a separate table in their docs to figure out what each code means.

So we added error handling:

if (response.data.errcode !== 0) {
  throw new Error(
    `TTLock API error: ${response.data.errmsg || 'Unknown error'} (code: ${response.data.errcode})`
  );
}

Understanding the TTLock data model

Before diving into specific operations, it helps to understand how TTLock structures things.

Locks and Gateways

A lock is the physical device on the door. It communicates over Bluetooth, which means it has very limited range — maybe a few meters at most.

A gateway is what connects the lock to the internet. The gateway sits somewhere within Bluetooth range of the lock (usually mounted on a nearby wall), connects to WiFi, and acts as a bridge between the lock and the TTLock cloud.

When you send a command like "unlock" through the API, this is what happens:

  1. Your code calls the TTLock API
  2. TTLock's servers send the command to the gateway
  3. The gateway sends it to the lock over Bluetooth
  4. The lock executes the command
  5. The lock responds back through the gateway to TTLock's servers
  6. The API returns a response to your code

This chain means a few things:

  • The gateway must be online for remote operations to work
  • The lock must be in range of the gateway
  • There's latency — it's not instant, usually 1-3 seconds in practice
  • It can fail at any step — network issues, Bluetooth interference, dead batteries, etc.

Access codes and cards

TTLock locks support two main ways of granting access beyond the physical key:

  • Access codes (also called passcodes or keyboard passwords) — numeric codes entered on the lock's keypad
  • Access cards (also called IC cards or NFC cards) — physical cards you tap on the lock

Both can be:

  • Permanent — valid forever
  • Time-limited — valid between a start and end date/time
  • One-time — becomes invalid after first use
  • Cyclic — valid only during specific time windows (e.g., weekdays 9am-5pm)

Each code or card is tied to a specific lock. If you want the same code to work on multiple locks, you need to create it separately on each one. That's where master codes come in (more on that later).

Core operations

Listing locks

To get all locks associated with your TTLock account:

async getAllLocks(): Promise<TTLockListResponse> {
  return this.makeAuthenticatedRequest('/v3/lock/list', {
    pageNo: 1,
    pageSize: 100,
  });
}

The response includes an array of lock objects. Each lock has:

  • lockId — TTLock's unique identifier (numeric)
  • lockAlias — the name you gave the lock
  • electricQuantity — battery level (0-100)
  • lockMac — the lock's Bluetooth MAC address
  • Various other status fields and metadata
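
If you want typed responses, the TTLockData and TTLockListResponse types used in these snippets can be pared down to just the fields we actually used. A sketch (the real response has many more fields, and the paging metadata is deliberately left loose):

interface TTLockData {
  lockId: number; // TTLock's unique identifier
  lockAlias: string; // the name you gave the lock
  electricQuantity: number; // battery level, 0-100
  lockMac: string; // the lock's Bluetooth MAC address
  [key: string]: unknown; // everything else gets kept as raw metadata
}

interface TTLockListResponse {
  list: TTLockData[];
  [key: string]: unknown; // page number, totals, etc.
}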

One thing we did early on: we stored locks in our own database with our own UUID primary key, and mapped that to TTLock's lockId. This gave us a stable identifier we could use in our API, even if TTLock's IDs changed or we wanted to switch providers later.
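
In practice the entity looked roughly like this (a sketch; the column names mirror the ones referenced later in the batch upsert and conflict detection code):

import { Column, Entity, PrimaryGeneratedColumn, UpdateDateColumn } from 'typeorm';

@Entity('locks')
export class Lock {
  @PrimaryGeneratedColumn('uuid')
  id: string; // our stable identifier, used everywhere in our own API

  @Column({ unique: true })
  lockId: number; // TTLock's numeric ID, used only when calling their API

  @Column({ nullable: true })
  alias: string;

  @Column({ nullable: true })
  status: number;

  @Column({ nullable: true })
  batteryLevel: number;

  @Column({ type: 'jsonb', nullable: true })
  metadata: Record<string, unknown>; // full TTLock response, kept for reference

  @UpdateDateColumn()
  updatedAt: Date; // last local modification

  @Column({ type: 'timestamptz', nullable: true })
  lastRefreshed: Date; // last time this row was synced from TTLock
}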

Listing gateways

Similar to locks:

async getAllGateways(): Promise<TTLockGatewayListResponse> {
  return this.makeAuthenticatedRequest('/v3/gateway/list', {
    pageNo: 1,
    pageSize: 100,
  });
}

The gateway response tells you whether each gateway is online (isOnline: 1 or 0) and which locks it's connected to. This is critical information — if a gateway goes offline, you can't send remote commands to any of its locks.

Unlocking and locking

To remotely unlock a lock:

async remoteUnlock(lockId: number): Promise<void> {
  await this.makeAuthenticatedRequest('/v3/lock/unlock', {
    lockId: lockId.toString(),
  });
}

And to lock it:

async remoteLock(lockId: number): Promise<void> {
  await this.makeAuthenticatedRequest('/v3/lock/lock', {
    lockId: lockId.toString(),
  });
}

These are synchronous operations — the API call waits for the lock to respond before returning. If it succeeds, you get back errcode: 0. If it fails (lock out of range, gateway offline, lock battery dead, etc.), you get an error code.

In practice, we found that unlock operations were fairly reliable if the gateway and lock were both healthy. Lock operations (closing the door) were sometimes flakier because they depend on the door being properly aligned.
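
To tie this back to the app flow: the endpoint in our own API that the mobile app hits is essentially a thin wrapper around these calls. A sketch (the route shape and the findById lookup helper are illustrative, not our exact code):

@Post('locks/:id/unlock')
async unlockDoor(@Param('id') id: string): Promise<{ success: boolean }> {
  // "id" is our own UUID; load the record to get TTLock's numeric lockId
  const lock = await this.locksService.findById(id);
  if (!lock) {
    throw new NotFoundException('Lock not found');
  }

  // User-initiated, so we call TTLock directly and surface any error immediately
  await this.ttlockService.remoteUnlock(lock.lockId);

  return { success: true };
}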

Checking lock state

To check whether a lock is currently locked or unlocked:

async queryLockState(lockId: number): Promise<{ state: number }> {
  const response = await this.makeAuthenticatedRequest('/v3/lock/queryOpenState', {
    lockId: lockId.toString(),
  });

  return { state: response.state }; // 0 = locked, 1 = unlocked, 2 = unknown
}

Note that this queries the lock in real-time through the gateway. It's not cached — you're actually pinging the lock. So it has the same requirements as unlock/lock (gateway must be online, lock must be in range).

Managing access codes

Listing codes for a lock:

async getPasscodes(lockId: number, pageNo = 1, pageSize = 100): Promise<TTLockPasscodeListResponse> {
  return this.makeAuthenticatedRequest('/v3/lock/listKeyboardPwd', {
    lockId: lockId.toString(),
    pageNo: pageNo.toString(),
    pageSize: pageSize.toString(),
  });
}

Creating a new access code (TTLock generates a random code):

async generatePasscode(
  lockId: number,
  keyboardPwdName: string,
  keyboardPwdType: number, // 1=permanent, 2=one-time, 3=period, 9=cyclic
  startDate?: number,
  endDate?: number,
): Promise<TTLockPasscodeResponse> {
  const params: Record<string, string> = {
    lockId: lockId.toString(),
    keyboardPwdName,
    keyboardPwdType: keyboardPwdType.toString(),
    addType: '2', // 1=via Bluetooth, 2=via gateway (remote)
  };

  if (startDate) params.startDate = startDate.toString();
  if (endDate) params.endDate = endDate.toString();

  return this.makeAuthenticatedRequest('/v3/keyboardPwd/get', params);
}

The addType parameter is important — it determines whether the code is added via a direct Bluetooth connection (1) or remotely through the gateway (2). For most API use cases, you want 2.

Creating a custom access code (you choose the code):

async addCustomPasscode(
  lockId: number,
  keyboardPwd: string, // 4-9 digits
  keyboardPwdName: string,
  addType: number,
  startDate?: number,
  endDate?: number,
): Promise<TTLockPasscodeResponse> {
  const params: Record<string, string> = {
    lockId: lockId.toString(),
    keyboardPwd,
    keyboardPwdName,
    addType: addType.toString(),
  };

  if (startDate) params.startDate = startDate.toString();
  if (endDate) params.endDate = endDate.toString();

  return this.makeAuthenticatedRequest('/v3/keyboardPwd/add', params);
}

Deleting an access code:

async deletePasscode(
  lockId: number,
  keyboardPwdId: number,
  deleteType: number, // 1=via Bluetooth, 2=via gateway
): Promise<void> {
  await this.makeAuthenticatedRequest('/v3/keyboardPwd/delete', {
    lockId: lockId.toString(),
    keyboardPwdId: keyboardPwdId.toString(),
    deleteType: deleteType.toString(),
  });
}

Managing access cards

Cards work very similarly to codes. You list them, add them, and delete them using nearly identical patterns:

async getCards(lockId: number, pageNo = 1, pageSize = 100): Promise<TTLockCardListResponse> {
  return this.makeAuthenticatedRequest('/v3/lock/listCard', {
    lockId: lockId.toString(),
    pageNo: pageNo.toString(),
    pageSize: pageSize.toString(),
  });
}

async addCard(
  lockId: number,
  cardNumber: string,
  cardName: string,
  startDate: number,
  endDate: number,
  addType: number, // 1=Bluetooth, 2=remote
): Promise<TTLockCardResponse> {
  return this.makeAuthenticatedRequest('/v3/card/add', {
    lockId: lockId.toString(),
    cardNumber,
    cardName,
    startDate: startDate.toString(),
    endDate: endDate.toString(),
    addType: addType.toString(),
  });
}

The sync problem

So far everything I've described is synchronous and real-time. When a user taps "unlock" in the app, we call the TTLock API immediately and wait for a response. That works fine for user-initiated actions.

But there's a bigger problem: how do you keep your database in sync with what's actually on the locks?

Consider these scenarios:

  • Someone uses the TTLock mobile app to create a new access code. Your API doesn't know about it.
  • A lock's battery drops to 10%. Your database still shows 60%.
  • A gateway goes offline. Your app thinks remote unlock will work, but it won't.
  • Someone deletes a lock from their TTLock account. Your database still has it.

If you only make API calls when users explicitly do something in your app, your database will quickly drift out of sync with reality. Users will see stale data, and worse, your app might make bad decisions based on incorrect information.

The solution is to periodically pull data from TTLock and update your database. But how often? And what data?

What we synced and how often

We ran scheduled sync jobs for four types of data:

  • Locks — every 2 minutes

    • Battery levels, lock state, online status, etc.
    • This is the most critical because it affects whether users can actually unlock doors
  • Access codes — every 3 minutes

    • New codes, updated codes, deleted codes
    • Codes change frequently as people grant and revoke access
  • Access cards — every 3 minutes (offset by 30 seconds from codes)

    • Similar to codes but less common in practice
  • Gateways — every 5 minutes

    • Online/offline status, connected locks
    • Changes less frequently than locks themselves

The intervals were based on two things:

  1. How critical the data is — locks are the most important, gateways the least
  2. Rate limits — TTLock throttles requests, so we couldn't sync everything every 30 seconds even if we wanted to

We also offset the sync schedules slightly (e.g., codes at 3 minutes, cards at 3 minutes + 30 seconds) so they didn't all fire at exactly the same time and overwhelm either our system or TTLock's.
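
With NestJS's scheduler, the stagger is just a matter of the cron expressions. A sketch using the six-field form (seconds come first, which is how the 30-second offset is expressed):

@Cron('0 */2 * * * *') // locks: every 2 minutes
async scheduleLockSync() { /* enqueue a sync-locks job */ }

@Cron('0 */3 * * * *') // access codes: every 3 minutes
async schedulePasscodeSync() { /* enqueue a sync-passcodes job */ }

@Cron('30 */3 * * * *') // access cards: every 3 minutes, offset by 30 seconds
async scheduleCardSync() { /* enqueue a sync-cards job */ }

@Cron('0 */5 * * * *') // gateways: every 5 minutes
async scheduleGatewaySync() { /* enqueue a sync-gateways job */ }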

The naive approach (and why it doesn't work)

The simplest way to sync would be:

@Cron('*/2 * * * *') // Every 2 minutes
async syncLocks() {
  const locks = await this.ttlockService.getAllLocks();

  for (const lock of locks.list) {
    await this.lockRepository.upsert(lock);
  }
}

This works for a toy project, but falls apart quickly in production:

Problem 1: It's slow

If you have 100 locks and you upsert them one at a time, that's 100 database round-trips. Even at 10ms per query, that's 1 second just for the database writes. Add in the TTLock API call time (often 1-3 seconds) and you're looking at 3-4 seconds per sync.

When you're syncing every 2 minutes, that's fine. But when TTLock is slow or you have 500 locks, it starts backing up.

Problem 2: What if TTLock is down?

If the TTLock API times out or returns errors, the whole sync job fails. You retry a few times, waste resources, and still don't make progress.

Worse, if TTLock is having a bad day and every request is failing, you're hammering them with retries, making the problem worse for everyone.

Problem 3: It doesn't scale

When you add sync jobs for codes, cards, and gateways, you have four different cron jobs all making API calls and database writes. There's no coordination between them, no shared retry logic, and no way to see what's actually happening across all the syncs.

The solution: queues, circuit breakers, and batch processing

We redesigned sync around three core ideas:

  1. Queues — sync jobs are tasks that get added to a queue and processed by workers
  2. Circuit breaker — when TTLock is failing, stop calling it and fail fast instead of retrying
  3. Batch processing — write to the database in chunks, not one row at a time

Let me walk through each.

Queues (BullMQ + Redis)

Instead of running sync logic directly in the cron job, the cron job just adds a task to a queue:

@Cron('*/2 * * * *')
async scheduleLockSync() {
  await this.lockSyncQueue.add('sync-locks', {}, {
    attempts: 3,
    backoff: { type: 'exponential', delay: 2000 },
  });
}

The actual work is done by a processor (worker) that pulls tasks from the queue:

@Processor('lock-sync-queue')
export class LockSyncProcessor {
  @Process('sync-locks')
  async handleLockSync(job: Job) {
    this.logger.log('Starting lock sync job');

    try {
      const locks = await this.ttlockService.getAllLocks();
      await this.batchService.batchUpsertLocks(locks.list);

      this.logger.log(`Synced ${locks.list.length} locks`);
    } catch (error) {
      this.logger.error('Lock sync failed', error);
      throw error; // Let BullMQ handle retry
    }
  }
}

Why is this better?

Decoupling

The cron job doesn't care whether the sync succeeds or fails. It just adds a task and moves on. The processor does the actual work, and if it fails, BullMQ handles retries automatically with exponential backoff.

Automatic retries

We configured each queue to retry failed jobs up to 3 times with exponential backoff (2 seconds, then 4 seconds, then 8 seconds). So transient errors (network blips, TTLock hiccups) get retried automatically without us writing custom retry logic.

Rate limiting

BullMQ lets you control how fast workers process jobs. We set a limit of 10 jobs per second across all sync queues. This keeps us safely under TTLock's rate limits and prevents us from overwhelming our own database during heavy sync periods.
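
Here's roughly where that knob lives, assuming the @nestjs/bull wrapper that matches the decorators in these snippets (with plain BullMQ, the equivalent limiter option goes on the Worker instead). The module name is a placeholder:

import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';

@Module({
  imports: [
    BullModule.registerQueue({
      name: 'lock-sync-queue',
      limiter: { max: 10, duration: 1000 }, // at most 10 jobs per second
    }),
  ],
})
export class SyncModule {}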

Observability

BullMQ tracks job state (waiting, active, completed, failed) and stores job history in Redis. We can see exactly how many sync jobs succeeded or failed, how long they took, and what errors occurred — all without custom logging or monitoring.
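
A small status endpoint can surface those counts directly from the queue (a sketch, assuming the queue is injected into the controller with @InjectQueue):

@Get('sync/lock-queue/status')
async getLockQueueStatus() {
  // Counts of waiting, active, completed, failed (and delayed) jobs, straight from Redis
  return this.lockSyncQueue.getJobCounts();
}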

Parallelism

We can run multiple workers (processors) for the same queue if needed. So if lock syncs start taking longer, we can scale horizontally by adding more workers without changing any code.
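
Within a single worker process, concurrency is also just a per-processor option (assuming @nestjs/bull-style @Process options; the number 5 is only an example). Scaling out further is a matter of running more instances of the same processor:

@Process({ name: 'sync-locks', concurrency: 5 })
async handleLockSync(job: Job) {
  // ...same body as shown earlier
}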

Circuit breaker

Even with queues and retries, there's still a problem: what if TTLock goes down for an extended period?

When TTLock is completely unreachable (server outage, network issue, their API being hammered), every sync job will:

  1. Try to call the TTLock API
  2. Wait for the request to time out (10-30 seconds)
  3. Fail and retry
  4. Repeat 2-3 more times
  5. Finally give up

If you have 4 sync types (locks, codes, cards, gateways) running every few minutes, you quickly end up with dozens of jobs all timing out and retrying simultaneously. Your workers get backed up, Redis fills with failed jobs, and you're spending all your resources on requests that you already know will fail.

The solution is a circuit breaker.

How it works

A circuit breaker sits between your code and the external API (TTLock in this case). It tracks how many requests are failing, and if failures exceed a threshold, it opens the circuit — meaning it stops sending requests entirely.

The circuit breaker has three states:

CLOSED (normal)

  • All requests go through to TTLock
  • On success: reset the failure counter
  • On failure: increment the failure counter
  • If failures exceed the threshold (e.g., 5 consecutive failures), trip to OPEN

OPEN (failing)

  • All requests are rejected immediately without calling TTLock
  • Return an error like "Circuit breaker is OPEN. TTLock API is currently unavailable."
  • After a cooldown period (e.g., 60 seconds), transition to HALF_OPEN

HALF_OPEN (testing)

  • Allow a small number of requests through to test if TTLock has recovered
  • On success: increment success counter; after N successes (e.g., 2), transition to CLOSED
  • On failure: immediately trip back to OPEN and reset the cooldown

Here's how we implemented it:

export class CircuitBreakerService {
  private readonly logger = new Logger(CircuitBreakerService.name);

  private state: 'CLOSED' | 'OPEN' | 'HALF_OPEN' = 'CLOSED';
  private failureCount = 0;
  private successCount = 0;
  private nextAttemptTime: number | null = null;

  private readonly failureThreshold = 5;
  private readonly successThreshold = 2;
  private readonly timeoutMs = 60000; // 1 minute

  async execute<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'OPEN') {
      if (this.nextAttemptTime !== null && Date.now() < this.nextAttemptTime) {
        throw new Error(
          `Circuit breaker is OPEN. TTLock API is currently unavailable. ` +
          `Retry in ${Math.ceil((this.nextAttemptTime - Date.now()) / 1000)}s.`
        );
      }
      // Timeout elapsed, move to HALF_OPEN
      this.state = 'HALF_OPEN';
      this.successCount = 0;
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  private onSuccess() {
    if (this.state === 'HALF_OPEN') {
      this.successCount++;
      if (this.successCount >= this.successThreshold) {
        this.state = 'CLOSED';
        this.failureCount = 0;
        this.logger.log('Circuit breaker closed - TTLock API recovered');
      }
    } else if (this.state === 'CLOSED') {
      this.failureCount = 0;
    }
  }

  private onFailure() {
    if (this.state === 'HALF_OPEN') {
      this.tripCircuit();
    } else if (this.state === 'CLOSED') {
      this.failureCount++;
      if (this.failureCount >= this.failureThreshold) {
        this.tripCircuit();
      }
    }
  }

  private tripCircuit() {
    this.state = 'OPEN';
    this.failureCount = 0;
    this.nextAttemptTime = Date.now() + this.timeoutMs;
    this.logger.warn(
      `Circuit breaker opened - TTLock API is unavailable. ` +
      `Will retry in ${this.timeoutMs / 1000}s.`
    );
  }
}

In the sync processors, we wrap TTLock calls with the circuit breaker:

async handleLockSync(job: Job) {
  try {
    const locks = await this.circuitBreaker.execute(() =>
      this.ttlockService.getAllLocks()
    );

    await this.batchService.batchUpsertLocks(locks.list);
  } catch (error) {
    // If circuit is open, job fails immediately
    // BullMQ will retry later when circuit might be closed
    throw error;
  }
}

What this achieves

Fail fast

When TTLock is down, we know within 5 failed attempts (a few seconds) and stop wasting time on timeouts. Failed jobs fail immediately and can retry later.

Reduced load

We're not hammering TTLock with hundreds of doomed requests while they're already having issues. This is good internet citizenship.

Automatic recovery

When TTLock comes back up, the circuit breaker detects it (via the HALF_OPEN test requests) and automatically resumes normal operation. No manual intervention needed.

Better observability

We expose the circuit breaker state via a monitoring endpoint:

@Get('circuit-breaker/status')
getCircuitBreakerStatus() {
  return {
    state: this.circuitBreaker.getState(),
    failureCount: this.circuitBreaker.getFailureCount(),
    nextAttemptTime: this.circuitBreaker.getNextAttemptTime(),
  };
}

So operators can see at a glance whether sync is failing due to TTLock being down, and when the next retry will happen.
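
The getters that endpoint relies on aren't shown in the class above; they just expose the private fields, something like:

getState(): 'CLOSED' | 'OPEN' | 'HALF_OPEN' {
  return this.state;
}

getFailureCount(): number {
  return this.failureCount;
}

getNextAttemptTime(): number | null {
  return this.nextAttemptTime;
}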

Important: Circuit breaker is only for sync

One critical detail: we only use the circuit breaker for scheduled sync jobs, not for user-initiated actions.

When a user taps "unlock" in the app, we call the TTLock API directly without the circuit breaker. Why?

  • User expectations: When a user does something, they expect an immediate response. If TTLock is slow or down, we want to surface that error to the user right away ("Unable to unlock door — please try again"). We don't want to return a cached "circuit breaker is open" error that doesn't reflect what's happening right now.

  • Real-time operations are rare: Users aren't unlocking doors every second. A few unlock attempts hitting a failing API won't cause cascading failures.

  • Sync is high volume: Sync jobs run automatically every few minutes for potentially hundreds of locks. Without a circuit breaker, they absolutely will cause problems when TTLock is down.

So the rule is: background sync uses the circuit breaker; real-time API calls don't.

Batch processing

The last piece of the sync puzzle is how we write data to the database.

When syncing locks, we might pull back 100, 500, or even 1,000+ lock records from TTLock. If we insert or update them one at a time:

for (const lock of locks) {
  await this.lockRepository.upsert(lock);
}

...that's potentially 1,000 database round-trips. Even with a fast database, that adds up.

The solution is to batch the writes. Instead of upserting one row at a time, we upsert them in chunks of (for example) 50 rows at once.

Here's how we implemented it:

async batchUpsertLocks(locks: TTLockData[], chunkSize = 50): Promise<void> {
  const chunks = this.chunkArray(locks, chunkSize);

  for (const chunk of chunks) {
    await this.dataSource.transaction(async (manager) => {
      await manager
        .createQueryBuilder()
        .insert()
        .into(Lock)
        .values(chunk.map(lock => this.mapTTLockToEntity(lock)))
        .orUpdate(
          ['status', 'batteryLevel', 'metadata', 'lastRefreshed'],
          ['lockId'], // conflict target (TTLock's ID)
        )
        .execute();
    });
  }
}

private chunkArray<T>(array: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

This does a few things:

Chunking

We split the array of locks into chunks of 50. Then we process each chunk in its own database transaction.

Upsert (insert or update)

We use PostgreSQL's ON CONFLICT ... DO UPDATE (via TypeORM's orUpdate) to say: "Insert this lock if it doesn't exist. If it already exists (based on lockId), update these specific fields instead."

This is key for sync — we might be syncing a mix of new locks and existing locks, and we want to handle both without separate logic.

Transactions

Each chunk is processed in a transaction. If any row in the chunk fails to write (e.g., a validation error), the whole chunk is rolled back. This prevents partial writes that could leave the database in an inconsistent state.

Mapping

We transform TTLock's data format into our own entity format via mapTTLockToEntity. This is where we:

  • Generate our own UUIDs for new locks
  • Extract critical fields (battery level, status, alias) from TTLock's response
  • Store the full TTLock response in a metadata JSONB column for reference

Why this is faster

Instead of 1,000 individual inserts, we do 20 batch inserts (1,000 ÷ 50). That's a 50x reduction in database round-trips.

In practice, syncing 500 locks went from taking 8-10 seconds (one-by-one) to under 1 second (batched). For codes and cards (which are per-lock, so volumes are even higher), the improvement was even more dramatic.

Conflict resolution

There's one more complication: what happens when both your API and TTLock have changed the same thing?

Example:

  1. Your API renames a lock to "Front Door"
  2. Someone uses the TTLock mobile app to rename it to "Main Entrance"
  3. Your next sync pulls down "Main Entrance" from TTLock

Do you overwrite your local change? Do you keep your version and ignore TTLock's? Do you try to merge them?

This is a conflict, and you need a strategy to resolve it.

Our strategy: cloud-wins

We chose a cloud-wins strategy: TTLock is always the source of truth. When there's a conflict, we take TTLock's version and overwrite our local data.

Why?

  • TTLock is the actual hardware: The lock itself is programmed with whatever TTLock's cloud has. If there's a mismatch, the lock reflects TTLock's data, not ours.
  • Multiple clients: Users might be using both our app and TTLock's app. TTLock's app directly reflects their cloud, so we want our app to match.
  • Simplicity: "Cloud-wins" is easy to reason about and doesn't require complex merge logic.

Detecting conflicts

Before we upsert data from TTLock, we check if there's a conflict:

async detectConflicts(
  entityType: string,
  incomingData: any[],
): Promise<Conflict[]> {
  const conflicts: Conflict[] = [];

  for (const cloudData of incomingData) {
    const localEntity = await this.findLocalEntity(entityType, cloudData.id);

    if (!localEntity) continue; // New entity, no conflict

    // Was the local entity modified after our last sync?
    if (localEntity.updatedAt > localEntity.lastRefreshed) {
      // Yes — we have local changes

      // Is the cloud data different from what we have?
      if (!this.isEqual(localEntity.metadata, cloudData)) {
        // Yes — both local and cloud changed = conflict
        conflicts.push({
          entityType,
          entityId: localEntity.id,
          localValue: localEntity.metadata,
          cloudValue: cloudData,
          detectedAt: new Date(),
        });
      }
    }
  }

  return conflicts;
}

We track:

  • updatedAt — when the entity was last modified locally
  • lastRefreshed — when we last synced from TTLock

If updatedAt > lastRefreshed, we know there's a local change that hasn't been synced yet. If the cloud data is also different, we have a conflict.

Resolving conflicts

When we detect a conflict, we:

  1. Log it to a sync_conflicts table for audit purposes
  2. Apply the cloud version (cloud-wins)
  3. Update lastRefreshed to mark the entity as synced

async resolveConflict(conflict: Conflict): Promise<void> {
  // Log the conflict
  await this.conflictRepository.save({
    entityType: conflict.entityType,
    entityId: conflict.entityId,
    localValue: conflict.localValue,
    cloudValue: conflict.cloudValue,
    resolutionStrategy: 'CLOUD_WINS',
    resolvedAt: new Date(),
  });

  // Apply cloud version
  await this.applyCloudData(conflict.entityType, conflict.cloudValue);

  this.logger.warn(
    `Conflict resolved for ${conflict.entityType} ${conflict.entityId} - ` +
    `cloud version applied`
  );
}

This means users might occasionally see their changes "disappear" if they made them locally and someone else (or TTLock's app) made a different change at nearly the same time. In practice this was rare, and when it did happen, the audit log gave us visibility into what changed and why.

Putting it all together

Here's the full flow for a lock sync:

  1. Cron job fires every 2 minutes
  2. Adds a sync-locks task to the queue (BullMQ)
  3. Processor (worker) picks up the task
  4. Processor wraps the TTLock API call in the circuit breaker:
    • If circuit is OPEN → fail immediately (retry later)
    • If circuit is CLOSED or HALF_OPEN → make the request
  5. TTLock API returns a list of locks (or fails, incrementing circuit breaker failure count)
  6. Processor calls conflict detection service
    • Compares cloud data to local database
    • Identifies conflicts (local and cloud both changed)
    • Logs conflicts to sync_conflicts table
  7. Processor calls batch upsert service
    • Chunks locks into groups of 50
    • For each chunk: upsert to PostgreSQL in a transaction
    • Applies cloud-wins resolution for conflicts
  8. Processor updates sync metadata table:
    • lastSyncStatus: 'SUCCESS'
    • recordsSynced: 237
    • lastSyncEnd: <timestamp>
  9. Job completes

If step 5 fails (TTLock error), the circuit breaker increments its failure count. After 5 consecutive failures, it trips to OPEN. Future sync jobs fail immediately at step 4, and the circuit stays OPEN for 60 seconds before transitioning to HALF_OPEN to test recovery.

If step 7 fails (database error), the job fails, and BullMQ retries it after 2 seconds (then 4, then 8) up to 3 times total.

This architecture gave us:

  • Fast syncs (1-2 seconds for hundreds of records)
  • Resilience (automatic retries, circuit breaker for extended outages)
  • Observability (queue status, circuit breaker state, sync metadata, conflict logs)
  • Scalability (horizontal worker scaling, rate limiting, batching)

Master codes

One last feature worth mentioning: master codes.

A master code is an access code that should exist on every lock. For example, a facilities manager might want one code that opens all doors in a building.

The naive approach:

async createMasterCode(code: string, name: string) {
  const locks = await this.getAllLocks();

  for (const lock of locks) {
    await this.ttlockService.addCustomPasscode(
      lock.lockId,
      code,
      name,
      2, // remote
    );
  }
}

This works, but it's slow (sequential API calls) and fragile (if one lock fails, the others don't get the code).

We solved it with the queue system again:

async createMasterCode(code: string, name: string): Promise<MasterCode> {
  // 1. Save the master code entity in our database
  const masterCode = await this.masterCodeRepository.save({
    keyboardPwd: code,
    keyboardPwdName: name,
    syncStatus: 'PENDING',
    progress: 0,
  });

  // 2. Get all locks
  const locks = await this.lockRepository.find();

  // 3. Enqueue one job per lock to add the master code
  const jobs = locks.map(lock => ({
    name: 'add-master-code',
    data: {
      masterCodeId: masterCode.id,
      lockId: lock.id,
      lockTTLockId: lock.lockId,
      keyboardPwd: code,
      keyboardPwdName: name,
    },
    opts: {
      attempts: 3,
      backoff: { type: 'exponential', delay: 2000 },
    },
  }));

  await this.masterCodeQueue.addBulk(jobs);

  return masterCode;
}

Then the processor handles each job:

@Process('add-master-code')
async handleAddMasterCode(job: Job) {
  const { masterCodeId, lockTTLockId, keyboardPwd, keyboardPwdName } = job.data;

  try {
    // Add the code to this specific lock
    await this.ttlockService.addCustomPasscode(
      lockTTLockId,
      keyboardPwd,
      keyboardPwdName,
      2, // remote
    );

    // Update progress
    await this.updateMasterCodeProgress(masterCodeId, 'SUCCESS');
  } catch (error) {
    await this.updateMasterCodeProgress(masterCodeId, 'FAILED');
    throw error;
  }
}

async updateMasterCodeProgress(masterCodeId: string, status: string) {
  const masterCode = await this.masterCodeRepository.findOne(masterCodeId);

  if (status === 'SUCCESS') {
    masterCode.successCount++;
  } else {
    masterCode.failureCount++;
  }

  const total = masterCode.successCount + masterCode.failureCount;
  const totalLocks = await this.lockRepository.count();
  masterCode.progress = (total / totalLocks) * 100;

  if (total === totalLocks) {
    masterCode.syncStatus =
      masterCode.failureCount === 0 ? 'COMPLETED' : 'PARTIAL';
  }

  await this.masterCodeRepository.save(masterCode);
}

This means:

  • Master code creation is non-blocking — the API returns immediately with the masterCode entity
  • Jobs are processed in parallel by multiple workers (up to our concurrency limit)
  • Progress is tracked — the MasterCode entity shows how many locks succeeded/failed
  • Failures are isolated — if adding the code to Lock A fails, it doesn't affect Lock B
  • Retries are automatic — BullMQ retries failed jobs

The client (mobile app) polls our API to track progress:

@Get('master-codes/:id')
async getMasterCode(@Param('id') id: string) {
  return this.masterCodeRepository.findOne(id);
}

Response:

{
  "id": "abc-123",
  "keyboardPwd": "123456",
  "keyboardPwdName": "Master Code",
  "syncStatus": "IN_PROGRESS",
  "progress": 47.5,
  "successCount": 19,
  "failureCount": 1,
  ...
}

So the user sees a progress bar in the app while the master code is being pushed to all locks in the background.

Lessons learned

After a few months of running this, here are the biggest takeaways:

1. Treat the TTLock API as unreliable

It's not that TTLock is particularly bad — most third-party APIs have issues sometimes. The key is to assume the API will fail and build around that.

  • Use circuit breakers to fail fast when it's down
  • Use queues and retries for non-critical operations
  • Cache everything you can in your own database
  • Don't block user actions on API calls unless absolutely necessary

2. Sync is harder than it looks

Keeping two databases in sync (yours and TTLock's) is conceptually simple but has a lot of edge cases:

  • What if both sides change the same data?
  • What if TTLock deletes something you still have?
  • What if your sync job fails halfway through?
  • How do you know when you're out of sync?

You need conflict detection, conflict resolution strategies, transactional writes, and good observability. Don't underestimate this.

3. Batch everything

Whether it's API calls or database writes, doing things one-at-a-time doesn't scale. Batch your writes, parallelize your requests (within rate limits), and use transactions to ensure consistency.

4. Queues are worth the complexity

Adding BullMQ and Redis adds operational complexity (one more thing to run and monitor). But the benefits are huge:

  • Automatic retries
  • Rate limiting
  • Job history and observability
  • Horizontal scaling
  • Decoupling sync from cron jobs

If you're building anything beyond a prototype, use a queue system.

5. Monitor everything

We exposed endpoints for:

  • Circuit breaker state
  • Queue status (waiting, active, completed, failed)
  • Sync metadata (last run time, record counts, errors)
  • Conflict history

When things go wrong (and they will), you need visibility into what's happening. Build monitoring in from the start.

6. Read TTLock's docs carefully (and then test anyway)

TTLock's API documentation is... okay. It covers the basics, but there are quirks and undocumented behaviors:

  • Some endpoints return 200 even when they fail (check errcode)
  • Error codes aren't always documented
  • Some fields are nullable in unexpected ways
  • Rate limits are vague ("don't call us too often")

Test everything thoroughly, and be prepared to reverse-engineer behavior through trial and error.

7. IDs are tricky

We made the decision early to use our own UUIDs for everything and map to TTLock's IDs internally. This was the right call — it gave us a stable API surface and made it easier to reason about relationships in our database.

But you have to be careful to use the right ID at the right time. Our API uses UUIDs in URLs, but TTLock API calls require TTLock's numeric IDs. We built helper methods to encapsulate this, and it prevented a lot of bugs.

Wrapping up

Building on top of TTLock (or any IoT platform) is a mix of straightforward API integration and solving distributed systems problems — keeping data in sync, handling failures gracefully, and building observability into your system.

The hardest parts weren't the lock operations themselves (unlock, create code, etc.). Those are just HTTP requests. The hard parts were:

  • Keeping our database synced with TTLock's without hammering their API
  • Handling TTLock outages without cascading failures
  • Resolving conflicts when both sides changed data
  • Scaling sync to hundreds of locks and thousands of access codes

Queues, circuit breakers, batch processing, and conflict resolution strategies solved these problems. If you're building something similar, I'd recommend starting with those patterns from day one. They're much easier to build in from the start than to retrofit later.

And most importantly: don't trust any external API to always be available. Build for failure, and you'll have a much more resilient system.


If you have questions or want to discuss IoT integrations, feel free to reach out — I'm happy to chat about the details.

© 2026 Nana Kwasi Asante