Each lyric line plays only that line of the YouTube video, auto-stops, then expands into the JP/EN study tiles for that line.
Lyrics and timestamps come from LRCLIB (ingested once, cached locally as JSON; no live API call on page load).
Each line is its own micro-player. Tap to play just that line; video auto-pauses at the next line. Tap again to stop.
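The micro-player behavior can be sketched against the YouTube IFrame API's `seekTo`/`playVideo`/`pauseVideo` calls. This is a minimal sketch, not the app's actual code; the `MiniPlayer` interface, `lineWindow`, and `playLine` names are assumptions.

```typescript
// Minimal sketch of the per-line micro-player (names are assumptions).
// Each lyric line i plays from its own timestamp to the next line's start.

interface MiniPlayer {
  // Subset of the YouTube IFrame API player surface this sketch relies on.
  seekTo(seconds: number, allowSeekAhead: boolean): void;
  playVideo(): void;
  pauseVideo(): void;
}

// The playable window for line i is [start of line i, start of line i+1);
// the last line has an open end.
function lineWindow(
  startsMs: number[],
  i: number
): { start: number; end: number | null } {
  return {
    start: startsMs[i],
    end: i + 1 < startsMs.length ? startsMs[i + 1] : null,
  };
}

// Seek to the line, play, and schedule the auto-pause at the next line.
// Returns the timer handle so a second tap ("tap again to stop") can cancel it.
function playLine(
  player: MiniPlayer,
  startsMs: number[],
  i: number
): ReturnType<typeof setTimeout> | null {
  const { start, end } = lineWindow(startsMs, i);
  player.seekTo(start / 1000, true);
  player.playVideo();
  if (end === null) return null; // last line: let the video run out
  return setTimeout(() => player.pauseVideo(), end - start);
}
```

The timer is the whole trick: there's no per-line stop event in the IFrame API, so the auto-pause is just a scheduled `pauseVideo` at the next line's timestamp.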
Tap the chevron to drop down study tiles: JP words on the left, English meanings on the right. Real R2-hosted Neural2 TTS, pulled from the production song page.
Ingestion pattern: the .lrc from LRCLIB is parsed and saved to data/inochi-mijikashi.json. To add more songs, fetch + parse + drop a new JSON file in that folder.
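The parse step can be sketched as a small LRC-to-JSON converter. A minimal sketch, assuming line-level `[mm:ss.xx]` tags; the `LyricLine` shape and `parseLrc` name are illustrative, not the project's actual schema.

```typescript
// Sketch: parse LRCLIB's .lrc body into the line-level JSON the page loads.
// (Function and field names are illustrative, not the real ingestion script.)

interface LyricLine {
  timeMs: number; // line start, milliseconds into the track
  text: string;   // lyric text for that line
}

function parseLrc(lrc: string): LyricLine[] {
  const lines: LyricLine[] = [];
  for (const raw of lrc.split(/\r?\n/)) {
    // Timed lines look like "[mm:ss.xx] lyric text"; metadata tags
    // like "[ar:Artist]" have no numeric timestamp and are skipped.
    const m = raw.match(/^\[(\d+):(\d+(?:\.\d+)?)\](.*)$/);
    if (!m) continue;
    const timeMs = Math.round((parseInt(m[1], 10) * 60 + parseFloat(m[2])) * 1000);
    lines.push({ timeMs, text: m[3].trim() });
  }
  return lines;
}
```

Run once per song and write the result into the `data/` folder; the page then reads the cached JSON instead of hitting LRCLIB.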
Missing from LRCLIB: word-level timing. LRCLIB is line-level only. Word-level timing (karaoke mode) lives on Apple Music / Musixmatch; that's where the next probe goes.
Word → line matching: substring match. If a word from data.json appears in a line's text, it shows up in that line's drop-down. Not perfect (conjugated forms can miss); v2 will refine.
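The v1 rule is just a verbatim substring check. A minimal sketch (the `StudyWord` field names are assumptions about data.json's shape):

```typescript
// Sketch of the v1 word-to-line match: a word attaches to a line iff its
// surface form appears verbatim in the line text. Conjugated forms miss:
// e.g. a stored dictionary form 恋する won't match 恋せよ in the lyric.

interface StudyWord {
  jp: string; // surface form as stored in data.json (assumed field name)
  en: string; // English gloss
}

function wordsForLine(words: StudyWord[], lineText: string): StudyWord[] {
  return words.filter((w) => lineText.includes(w.jp));
}
```

This is why conjugation is the v2 gap: fixing it means matching on lemmas (via a tokenizer) rather than raw substrings.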