"Inference Engineering" is now available. Get your copy here

Deploy to Reality: A VR Room Experience & Happy Hour

A VR happy hour for engineers who like their simulations immersive and their inference fast.

Step out of your local dev environment and into a full-scale VR room.

Join Baseten for an evening of immersive VR gameplay, drinks, and conversation with AI builders and leaders.

Collaborate, compete, or just see what happens when reality gets a little virtual.

What to expect:

  • 🕶️ Immersive VR: every attendee gets 40 minutes in a fully interactive, room-scale experience.

  • 🍹 Drinks & bites: beer, wine, cocktails, and food to keep the runtime stable.

  • 🧠 IRL connection: meet fellow engineers and AI leaders.

  • 🎁 Fun extras: swag and a few surprises along the way.

No prior VR experience required. No pitches. No panels. Just good vibes and shared simulations.

👉 Space is limited. Save your spot.

___

About Baseten

This event is co-sponsored by Baseten — the lightning-fast, highly reliable, and massively scalable production inference platform. In the inference world, speed = money, and Baseten is the fastest inference platform out there. See how Zed and Amp are making every millisecond count.

About Resolve AI

Resolve AI is AI for prod: AI agents that work across code, infrastructure, telemetry, and knowledge. Their goal is to help every engineer operate production fluently without being bottlenecked by context, expertise, or tools. Companies like Coinbase, DoorDash, Zscaler, and Gametime use Resolve AI to automate alert triage, accelerate incident resolution, simplify production debugging, and bring production context into development. Learn more at resolve.ai.

About Mixedbread

Mixedbread is a production-ready multimodal search engine that supports 300+ languages and powers fast, accurate retrieval at scale. With the first production-grade multi-vector system, teams can make video, audio, code, and text searchable and usable for AI. Mixedbread handles the full pipeline, from processing to indexing. It already serves 1B+ documents in production and delivers sub-100ms search latency.