Each word lights up as it's sung — karaoke-style highlighting inside each tap-to-play line. Word timing is estimated (LRCLIB is line-level only), but it feels alive.
Words light up while the line plays. The line text is split into vocab-aware tokens (longest vocab match wins; falls back to single characters), each token gets a time slice weighted by character count, and the current token gets a pulse-glow as playback crosses it.
Post-line tint. Already-sung words fade to a muted shade so you see what just went by.
Everything else from v1 still works: tap line to play, chevron drops down study tiles, offset slider for MV drift.
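The offset slider is just a playhead shift applied before the token lookup. A minimal sketch, assuming a list of `(token, start, end)` slices; the function name and tuple layout are illustrative, not the app's actual code:

```python
def current_token(slices, playback_t, offset=0.0):
    """Return the index of the token under the offset-adjusted playhead,
    or None if the playhead is outside the line. `slices` is a list of
    (token, start_seconds, end_seconds) tuples (hypothetical layout)."""
    t = playback_t + offset  # shift playhead to compensate for MV drift
    for i, (_, start, end) in enumerate(slices):
        if start <= t < end:
            return i
    return None

# Example: a 2.5 s line starting at 10.0 s, pre-sliced per token.
slices = [("你好", 10.0, 11.0), ("世界", 11.0, 12.0), ("啊", 12.0, 12.5)]
```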
How word timing works here: LRCLIB only gives a timestamp per line. We split each line into tokens (vocab word matches plus leftover single characters), weight each token by its character count, and proportionally slice the line's duration across the tokens. The result is plausible word timing, not true karaoke timing.
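That heuristic can be sketched in a few lines. The function names, the tiny vocab set, and the greedy longest-match tokenizer below are illustrative assumptions, not the app's actual implementation:

```python
def split_tokens(line, vocab, max_len=4):
    """Greedy tokenizer: at each position, take the longest vocab match
    (longest wins), else fall back to a single character."""
    tokens, i = [], 0
    while i < len(line):
        match = None
        for n in range(min(max_len, len(line) - i), 1, -1):
            if line[i:i + n] in vocab:
                match = line[i:i + n]
                break
        if match is None:
            match = line[i]  # leftover char: no vocab word starts here
        tokens.append(match)
        i += len(match)
    return tokens

def slice_line(start, duration, tokens):
    """Slice the line's duration across tokens, weighted by char count.
    Returns (token, start_seconds, end_seconds) tuples."""
    total = sum(len(t) for t in tokens)
    out, t = [], start
    for tok in tokens:
        dt = duration * len(tok) / total
        out.append((tok, t, t + dt))
        t += dt
    return out

# A 2.5 s line: the two 2-char words each get 1.0 s, the 1-char tail 0.5 s.
tokens = split_tokens("你好世界啊", {"你好", "世界"})
slices = slice_line(10.0, 2.5, tokens)
```

Because the slices are proportional, the estimate degrades gracefully: a line of all single characters just divides the duration evenly.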
For real word-level timing we'd need Apple Music / Musixmatch (word-timed) — that's the direction the Apple Lyrics probe is heading.