Lab Β· Tap Lines

Tap a line. Hear it. Learn it.

Each lyric line plays only its slice of the YouTube video, auto-stops, then expands into JP/EN study tiles for that line.

What's different

Lyrics and timestamps come from LRCLIB (ingested once, cached locally as JSON β€” no live API call on page load).

Each line is its own micro-player. Tap to play just that line; video auto-pauses at the next line. Tap again to stop.
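
The per-line play window falls out of the line-level timestamps: a line ends where the next one starts, and the last line runs to the end of the track. A minimal sketch of that logic (the names `LyricLine` and `lineWindow` are illustrative, not from the real page):

```typescript
interface LyricLine {
  startSec: number; // line start time in seconds, from the parsed .lrc
  text: string;
}

// Compute the [start, end) play window for line i.
// lines must be sorted by startSec; the last line plays to trackEndSec.
function lineWindow(
  lines: LyricLine[],
  i: number,
  trackEndSec: number
): { start: number; end: number } {
  const start = lines[i].startSec;
  const end = i + 1 < lines.length ? lines[i + 1].startSec : trackEndSec;
  return { start, end };
}

// Hypothetical wiring against the YouTube IFrame API on tap:
//   player.seekTo(start, true); player.playVideo();
//   setTimeout(() => player.pauseVideo(), (end - start) * 1000);
```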

Tap the chevron to drop down study tiles — JP words on the left, English meanings on the right. Audio is real R2-hosted Neural2 TTS, pulled from the production song page.

[interactive demo: イノチミジカシコイセヨオトメ · クリープハイプ (Life is Short, Fall in Love, Maiden) · src: lrclib.net #12182810 · 19 lines · 3:12]

Ingestion pattern: the .lrc from LRCLIB is parsed and saved to data/inochi-mijikashi.json. To add more songs, fetch + parse + drop a new JSON file in that folder.
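
The parse step is small: LRCLIB's line-level .lrc prefixes each lyric with a `[mm:ss.xx]` timestamp, and metadata tags like `[ar:...]` have no digit pattern, so a single regex covers it. A sketch under those assumptions (the `parseLrc` name and `LyricLine` shape are illustrative):

```typescript
interface LyricLine {
  startSec: number;
  text: string;
}

// Parse a line-level .lrc body into the JSON shape cached under data/.
// Timestamp lines look like "[00:12.40] lyric text"; metadata tags
// ("[ar:...]", "[ti:...]") fail the digits-colon pattern and are skipped.
function parseLrc(lrc: string): LyricLine[] {
  const out: LyricLine[] = [];
  const re = /^\[(\d+):(\d{2}(?:\.\d+)?)\]\s*(.*)$/;
  for (const raw of lrc.split("\n")) {
    const m = re.exec(raw.trim());
    if (!m) continue; // metadata tag or blank line
    const startSec = parseInt(m[1], 10) * 60 + parseFloat(m[2]);
    out.push({ startSec, text: m[3] });
  }
  return out;
}
```

Running the output through `JSON.stringify` and dropping the file in `data/` is the whole ingestion step.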

Missing from LRCLIB: word-level timing. LRCLIB is line-level only. Word-level (karaoke mode) lives on Apple Music / Musixmatch β€” that's where the next probe goes.

Word β†’ line matching: substring match. If a word from data.json appears in a line's text, it shows up in that line's drop-down. Not perfect (conjugated forms can miss) β€” v2 will refine.