[Image: a rotating 3D Earth inside a browser, with a search bar above it; a typed phrase like "best ramen" swings the camera toward a glowing pin over a real-world city.]
Type "where they make the best ramen" into a search box. Watch a 3D Earth spin around and pin Fukuoka, Japan. That is the entire pitch for Joshua Lochner's Semantic Globe, and it is much weirder than it sounds, because nothing about it ever talks to a server. The query gets embedded by a small language model running in your browser. The globe is rendered right beside it. Both halves of the demo share the same GPU in the same tab, neither aware that the other is also a tenant.
That coexistence is the actual trick. Transformers.js pushes the text embedding through WebGPU while Three.js shades the globe on the same chip, and a cosine-similarity search quietly resolves "somewhere cold and lonely" or "a place with many cathedrals" into a coordinate the camera then arcs toward. The swoop is the easy part. The interesting part is that the entire pipeline, tokenizer to fragment shader, never leaves your machine.
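The matching step can be sketched as a nearest-neighbor search over precomputed place embeddings, assuming the Space stores one vector per candidate city; the three-dimensional vectors below are toy stand-ins for real model output, and the `resolve` helper is hypothetical.

```javascript
// Cosine similarity between two dense vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy index: each candidate city carries a precomputed embedding
// plus the coordinate the camera will arc toward.
const cities = [
  { name: 'Fukuoka',   lat: 33.59, lon: 130.40, vec: [0.9, 0.1, 0.0] },
  { name: 'Reykjavik', lat: 64.15, lon: -21.94, vec: [0.0, 0.8, 0.6] },
  { name: 'Cologne',   lat: 50.94, lon: 6.96,   vec: [0.1, 0.2, 0.9] },
];

// Pick the city whose embedding is most similar to the query embedding.
function resolve(queryVec) {
  return cities.reduce((best, city) =>
    cosineSimilarity(queryVec, city.vec) > cosineSimilarity(queryVec, best.vec)
      ? city
      : best);
}

const hit = resolve([0.85, 0.15, 0.05]); // toy embedding for "best ramen"
console.log(hit.name, hit.lat, hit.lon); // → Fukuoka 33.59 130.4
```

With normalized embeddings the cosine reduces to a dot product, so even a few thousand cities resolve in well under a frame; the resulting lat/lon pair is all the renderer needs to aim the camera.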
Pop the Hugging Face Space open and click Files to see how short the source actually is. Then try a few queries that have no business resolving to a real place.