Pretext: The DOM-Free Text Measurement Library That AI Coding Agents Are Already Using

lschvn · 7 min read

A new library showed up on npm on March 29 with zero announcement and already hundreds of downloads: Pretext (@chenglou/pretext), a pure JavaScript and TypeScript library for multiline text measurement and layout — without ever touching the DOM.

The author is Cheng Lou, previously known for work on React and ReasonML. The concept is clean: measure text the way browsers do, using the browser's own font engine as ground truth, but entirely through canvas — no getBoundingClientRect, no offsetHeight, no layout reflow.

Why This Matters: The Reflow Problem

Every front-end developer has hit this wall: you need to know how tall a block of text will be before rendering it. The traditional answer is to render it, measure it, then adjust. That triggers layout reflow — one of the most expensive operations in the browser. For a single label it's fine. For a list of 10,000 messages, a virtualized scroll, or an AI agent generating UI dynamically, it's a disaster.

Pretext sidesteps this entirely. It measures text using a hidden canvas and the browser's own measureText() API, which uses the same font engine that the DOM uses. The measurement is accurate because it's using real browser typography — but it's happening off-screen, without triggering layout at all.

import { prepare, layout } from '@chenglou/pretext'

// One-time preparation (done once per text+font combination)
const prepared = prepare('AGI spring is here. 시작했다 🚀', '16px Inter')

// Hot path: pure arithmetic, no DOM involved
const { height, lineCount } = layout(prepared, textWidth, 20)

prepare() does the one-time work: normalizing whitespace, segmenting text, applying glue rules, measuring segments with canvas. layout() after that is roughly 0.09ms for 500 texts on the current benchmark. That's sub-millisecond.
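To make the split concrete, here is a minimal toy sketch of the same two-phase idea. Everything in it is illustrative, not Pretext's actual internals: the fixed-width measure function stands in for canvas measureText, and the wrapper is a plain greedy line breaker.

```javascript
// Stand-in for canvas measureText: pretend every character is 8px wide.
const measure = (s) => s.length * 8;

// One-time phase: split into words and cache each word's width.
function toyPrepare(text) {
  return text
    .split(/\s+/)
    .filter(Boolean)
    .map((word) => ({ word, width: measure(word) }));
}

// Hot path: pure arithmetic over the cached widths (greedy wrapping).
function toyLayout(prepared, maxWidth, lineHeight) {
  const spaceWidth = measure(' ');
  let lineCount = prepared.length ? 1 : 0;
  let lineWidth = 0;
  for (const { width } of prepared) {
    const needed = lineWidth === 0 ? width : lineWidth + spaceWidth + width;
    if (needed > maxWidth && lineWidth > 0) {
      lineCount++;          // word doesn't fit: start a new line
      lineWidth = width;
    } else {
      lineWidth = needed;   // word fits: extend the current line
    }
  }
  return { lineCount, height: lineCount * lineHeight };
}
```

The point of the shape is that toyLayout can be called thousands of times per frame at different widths without ever touching the measuring function again.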

Two Use Cases, One Library

1. Measure Without Touching DOM

The primary use case: know your text height before rendering. This unlocks:

  • Virtualization without guesstimates: Render only what fits in the viewport, measure the rest ahead of time
  • CLS (Cumulative Layout Shift) prevention: Pre-measure text before it loads so you can reserve the right space and keep scroll position stable
  • Development-time overflow detection: AI coding agents can verify that a button label won't wrap to two lines before the code even runs
  • Fancy userland layouts: Masonry, custom flexbox implementations, layouts that nudge values without CSS hacks

// Detect overflow before it happens
const { height } = layout(prepared, buttonWidth, buttonLineHeight)
if (height > buttonMaxHeight) {
  // Truncate, tooltip, or reflow
}
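Pre-measured heights are also what make the virtualization case cheap. Here is a sketch of the downstream arithmetic, assuming every row's height has already been computed up front (e.g. one layout() call per row); the helper names are illustrative, not part of Pretext.

```javascript
// Turn per-row heights into cumulative offsets.
// offsets[i] = y position of row i; the last entry = total scroll height.
function buildOffsets(heights) {
  const offsets = [0];
  for (const h of heights) offsets.push(offsets[offsets.length - 1] + h);
  return offsets;
}

// Binary-search the first row whose bottom edge is below scrollTop.
function firstVisibleRow(offsets, scrollTop) {
  let lo = 0;
  let hi = offsets.length - 2;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (offsets[mid + 1] <= scrollTop) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}
```

With exact heights known ahead of time, the scroll container gets its true total height immediately, so there is no guesswork and no layout shift when rows stream in.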

2. Manual Line Layout

If you need the actual line contents — for canvas/SVG rendering, for text wrapping around floats, or for building custom renderers — Pretext provides lower-level APIs:

import { prepareWithSegments, layoutWithLines } from '@chenglou/pretext'

const prepared = prepareWithSegments('AGI spring is here. 시작했다 🚀', '18px "Helvetica Neue"')
const { lines } = layoutWithLines(prepared, 320, 26) // 320px max width, 26px line height

for (let i = 0; i < lines.length; i++) {
  ctx.fillText(lines[i].text, 0, i * 26)
}

The walkLineRanges() variant never even builds line strings — it calls a callback for each line with its width and cursor positions. This enables binary searches over layout dimensions, shrink-wrap containers, and balanced text without string allocation.

// Find the tightest width that fits all text
let maxW = 0
walkLineRanges(prepared, 320, line => {
  if (line.width > maxW) maxW = line.width
})
// maxW = the minimum container width that won't overflow
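The same callback-per-line shape is what makes binary search over widths practical. Here is a sketch of that pattern, where countLines stands in for any walkLineRanges-based line counter; the helper name and bounds are illustrative, not Pretext's API.

```javascript
// Find the narrowest width that still fits the text in targetLines lines
// ("balanced text"). countLines(width) must be monotonically non-increasing
// as width grows, which greedy line breaking guarantees.
function narrowestWidthFor(countLines, targetLines, loWidth, hiWidth) {
  while (loWidth < hiWidth) {
    const mid = (loWidth + hiWidth) >> 1;
    if (countLines(mid) <= targetLines) hiWidth = mid; // fits: try narrower
    else loWidth = mid + 1;                            // overflows: go wider
  }
  return loWidth;
}
```

Because each probe is pure arithmetic over prepared data, a dozen probes per frame cost microseconds, which is what makes balanced headlines and shrink-wrapped tooltips feasible at runtime.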

And layoutNextLine() handles variable-width columns — the canonical case of text flowing around a floated image:

let cursor = { segmentIndex: 0, graphemeIndex: 0 }
let y = 0

while (true) {
  // While this line overlaps the image vertically, narrow it
  // (assuming the image is floated on the right, so x stays 0)
  const width = y < image.bottom ? columnWidth - image.width : columnWidth
  const line = layoutNextLine(prepared, cursor, width)
  if (line === null) break
  ctx.fillText(line.text, 0, y)
  cursor = line.end
  y += 26
}

What Makes Pretext Different

Benchmark Numbers

From the project's own checked-in benchmark on a shared 500-text batch:

  • prepare(): ~19ms (one-time, cached)
  • layout(): ~0.09ms (hot path, pure arithmetic)

For context, a single getBoundingClientRect() call on a moderately complex DOM subtree can take 1-5ms on a mid-tier device. Pretext's hot path is 10-50x faster than DOM measurement, with zero side effects.

Full Unicode + Emoji + Bidirectional Support

Pretext handles text shaping correctly across all languages. The README specifically calls out support for emojis and mixed bidirectional (bidi) text — Arabic mixed with English, Hebrew mixed with numbers. The library uses the browser's own font engine as the source of truth, so it shapes text exactly the way the DOM will.

// Mixed scripts, emojis, bidirectional — all handled correctly
prepare('AGI spring is here. بدأت الرحلة 🚀', '16px Inter')
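One ingredient of getting this right is grapheme-cluster segmentation, so an emoji or a Hangul syllable never splits mid-cluster. Browsers and Node expose this via the standard Intl.Segmenter; the README doesn't say whether Pretext uses it internally, so treat this as a sketch of the general technique rather than its implementation.

```javascript
// Split text into user-perceived characters (grapheme clusters),
// the unit a line breaker must never cut in half.
function graphemes(text) {
  const seg = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
  return Array.from(seg.segment(text), (s) => s.segment);
}
```

For example, 'e\u0301' (e plus a combining accent) is two code units but one grapheme, and '🚀' is a surrogate pair that must stay together.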

Two-Step Cache Architecture

The prepare() / layout() split is the key performance insight. prepare() is expensive (canvas measurement, text segmentation, bidi reordering) but cached. layout() is arithmetic on pre-computed data. You pay the setup cost once; the hot path stays cheap.

Resize? Only layout() reruns. Font change? Only prepare() reruns for affected text. This is the kind of API design that makes AI code generation tractable — the agent can call layout() in a tight loop without anxiety.
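The split also composes naturally with a memo cache keyed by text plus font, so a resize never re-triggers measurement. Here is a sketch of that caching pattern; the wrapper is illustrative, not part of Pretext's API.

```javascript
// Wrap any expensive (text, font) => data function in a memo cache.
function makePreparedCache(prepare) {
  const cache = new Map();
  return (text, font) => {
    // '\0' cannot appear in a CSS font string, so the key is unambiguous.
    const key = font + '\u0000' + text;
    let hit = cache.get(key);
    if (hit === undefined) {
      hit = prepare(text, font);
      cache.set(key, hit);
    }
    return hit;
  };
}
```

A real cache would also want an eviction policy (LRU, or clearing on font unload), but the shape is the point: measurement cost is paid once per text+font pair.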

The Caveats

Pretext is explicit about what it doesn't do (yet):

  • It targets white-space: normal, word-break: normal, overflow-wrap: break-word, line-break: auto — the common case, not every CSS text model
  • system-ui is unsafe for measurement accuracy on macOS — you need a named font
  • It's not a full font rendering engine — it doesn't handle some advanced OpenType features, but for the 95% case of measuring where text will break, it covers everything

The white-space: pre-wrap mode is also supported for textarea-like cases where spaces, tabs, and newlines are preserved.
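The difference between the two modes comes down to the whitespace normalization step before wrapping. A toy illustration of the two models (not Pretext's API):

```javascript
// 'normal' collapses runs of spaces, tabs, and newlines into single spaces
// and wraps one logical paragraph; 'pre-wrap' preserves whitespace and
// treats each newline as a hard break (each returned string wraps separately).
function normalizeWhitespace(text, mode) {
  if (mode === 'pre-wrap') return text.split('\n');
  return [text.replace(/\s+/g, ' ').trim()];
}
```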

Who Is This Actually For

The obvious answer is UI library authors and virtualization layer maintainers. But Pretext's README makes an interesting observation about AI coding agents:

"Development time verification (especially now with AI) that labels on e.g. buttons don't overflow to the next line, browser-free"

When an AI generates UI code, it currently has no way to know if a label will overflow without running the code in a browser. Pretext gives AI agents the ability to predict text layout at generation time — before the code runs. That's a meaningful capability for AI-assisted UI development. For a broader look at how AI coding tools like Claude Code and Cursor are evolving the developer experience, see our AI dev tool rankings.

The library also matters for:

  • Canvas/SVG rendering where you don't have a DOM at all
  • Server-side layout calculation (server-side rendering without a DOM)
  • Game UIs built on canvas
  • Native app toolkits that embed a JS engine but don't expose the browser's layout system

Bun is one such environment where Pretext's approach shines — with its embedded JavaScript engine and native TypeScript support, Pretext can calculate layouts server-side without a DOM at all.

Installation

npm install @chenglou/pretext

Demos live at chenglou.me/pretext. The source is on GitHub.


Pretext is Cheng Lou's second act in the text rendering space — building on his earlier work on text-layout from a decade ago. Sebastian Markbåge's original text-layout design (canvas measureText for shaping, bidi from pdf.js, streaming line breaking) informed the architecture that Pretext now carries forward.

Related articles

More coverage with overlapping topics and tags.

  • Oxc Is Quietly Building the Fastest JavaScript Toolchain in Rust — And It's Almost Ready (javascript): While ESLint v10 was wrestling with legacy cleanup, the Oxc project shipped a linter 100x faster, a formatter 30x faster than Prettier, and a parser that leaves SWC in the dust. Here's what the JavaScript oxidation compiler actually is.
  • Knip v6 Lands oxc Parser for 2-4x Performance Gains Across the Board (typescript): The popular dependency and unused-code scanner for JavaScript and TypeScript gets a major overhaul, replacing its TypeScript backend with the Rust-based oxc-parser — and the results are dramatic.
  • Vue 3.5: The 'Minor' Release That Rewrote the Rules of Frontend Performance (javascript): Vue 3.5 arrived with no breaking changes and a set of internals improvements that should make any developer pay attention — 56% less memory usage, lazy hydration, and a stabilized reactive props API.
