THE CURATED LOG XXXVII
By MITO Universe - @mito.universe
Welcome back to MITO Universe.
This week, presence became performance.
The question reshaping AI video this April isn’t “what can it generate?” but “where does it show up?” Pika Labs dropped PikaStream 1.0, a real-time model that gives AI agents faces, voices, and the capacity to join Google Meet calls at 24fps with 1.5-second latency. Not video-as-deliverable. Video-as-interface. When agents materialize on-screen with expressive avatars and cloned voices, the shift is structural: from batch rendering to live responsiveness, from asset generation to continuous presence.
Meta followed days later with Muse Spark, its first major model in a year and the debut from Alexandr Wang’s Meta Superintelligence Labs. Scored at 52 on the Intelligence Index, the model slots into fourth globally behind Gemini, GPT, and Claude. The strategy reveals itself clearly: proprietary over open-source, consumer-facing over developer-first, health reasoning and shopping modes over code. A pivot toward personal superintelligence monetized through ads, API previews, and platform integration across WhatsApp, Instagram, and AI glasses.
Meanwhile, Alibaba executed what might be the cleanest stealth launch in AI video history. HappyHorse-1.0 appeared anonymously on leaderboards April 7, climbed to #1 in blind tests for text-to-video and image-to-video, then revealed itself April 10 once scores were locked. The anonymous debut wasn’t an accident: it was proof-by-merit, algorithmic credibility established before brand capital entered the frame.
Below, artists working with AI as authored practice, and one Universe creator whose work transforms product photography into existential still life.
SELECTED CREATORS
The MITO Universe grows through the people who give it form. Each week, we spotlight a creator working with AI as a deliberate, authored practice. If you’re shaping worlds, systems, or a distinct visual language, this is where it belongs. Become part of the community and start building your own MITO Universe account.
MITO UNIVERSE CREATIVE SPOTLIGHT:
Ivan Georgiev | @ivandesignx
Ivan Georgiev operates in the gap between product photography and surrealist proposition, generating images where everyday consumer objects achieve narrative density through juxtaposition, staging, and atmospheric treatment. The Bulgarian designer works primarily with Midjourney to construct scenarios where mundane items — cosmetics, beverages, household products — appear as protagonists in scenes that feel simultaneously commercial and dreamlike.


What distinguishes the practice is its refusal of straightforward product visualization. Georgiev positions bottles in impossible spaces, arranges packaging against backgrounds that shift from hyperreal to abstract, lights objects with the drama usually reserved for portrait photography. A skincare product becomes a monumental sculpture. A beverage can occupies a surreal landscape. Common items receive treatment suggesting they matter beyond their utility, that they carry symbolic weight or participate in unstated narratives.
The work mines commercial photography’s visual languages while subverting its functional purpose. These images could be advertisements if they weren’t so strange, could be still lifes if they weren’t so clearly about branding. Georgiev locates the surreal inside product culture itself, suggesting that consumer objects already operate as totems, already participate in symbolic systems that exceed their stated purpose. The AI doesn’t impose strangeness on ordinary things; it reveals strangeness already latent in how commodities circulate through visual culture.


The color palette tends toward jewel tones and moody atmospherics: deep blues, rich burgundies, emerald greens rendered with lighting that emphasizes volume and texture. Objects glow, surfaces reflect, materials announce their tactility even as they remain purely digital. What emerges is product photography reimagined as existential still life, where the commodity form becomes a site for investigating how objects acquire meaning through framing, context, and the particular magic that occurs when something designed for use is instead contemplated as image.
All of this work was fully conceived, art-directed, and produced through generative AI as primary medium, with Georgiev’s directorial eye determining which outputs earn visibility and which remain in the generative archive.
Anna Condo | @anna.condo
Anna Condo treats the flower as a site where biology dissolves into pure structure. The Armenian-born, France-raised artist approaches botanical subjects through AI not as botanical illustration but as an investigation into what remains when organic form passes through algorithmic interpretation. From 2013 to 2022, her photographic practice positioned each bloom as a portrait: alive, complex, insistent on presence. Since she discovered generative tools in 2022, that same obsession has relocated: the flower persists, but now filtered through systems that reduce petal to line, color to code, volume to vector.
What surfaces in her COLOR CODE series isn’t botanical accuracy but emotional cartography rendered through chromatic intensity. Blues sharp enough to feel cold. Golds laid over foliage like gilt on a medieval manuscript. Tomatoes hanging among architectural linework that exposes the scaffolding beneath the image, not hidden but visible as the structural logic organizing organic chaos. The wireframe grids, the geometric overlays, the technical drawing aesthetic: these aren’t glitch or artifact but deliberate exposure of how AI constructs illusion from mathematics.


Condo’s background in film and photography operates as directorial training applied to generative output. She describes the process as choreography, moving through iterations until something sparks: that quiet recognition signal indicating “that’s it, that’s me.” The work isn’t about technical mastery but sustained conversation with the machine, prompting and refining until algorithmic suggestion aligns with authorial intent. Each image functions as a film still extracted from a narrative that exists only as implication, characters and scenes assembled through color palette and compositional framing rather than literal depiction.
The flower persists across decades and media shifts because for Condo it carries the same contradiction she locates in human presence: fragile and fierce, ephemeral yet eternal. What AI enables isn’t a replacement of photography but an extension into territories where imagination previously remained unvisualized. The work stretches reality, she says, creating entire worlds that once lived only in her mind. The technical apparatus becomes a medium for externalizing internal vision, translating intuition into image through collaborative negotiation with systems trained on collective visual history.


Where her floral photography positioned blooms as unapologetically feminine subjects deserving the same attention as human portraiture, the AI work abstracts that femininity into pattern, texture, chromatic relationship. The subject remains constant. The investigation deepens. What does the flower become when processed through neural networks? What persists when organic complexity encounters digital reduction? The answer Condo provides: something that breathes differently but still breathes, beauty transformed but not diminished, presence relocated from physical to conceptual space yet somehow more vivid for having passed through the machine.
WHAT’S NEW
ElevenLabs Expands from Voice to Full Multimedia Audio Engine
ElevenLabs evolved from voice synthesis into an integrated production environment combining text-to-sound-effect generation, Audio Studio 3.0 with timeline-based video editing, and AI music composition for mood- and genre-specific scoring. The expansion from single-modality tool to comprehensive audio stack follows a structural logic: once voice becomes indistinguishable from human speech, demands for adjacent infrastructure surface. Sound effects, music, and video timeline integration complete the layers required for AI-first production without external DAW dependency. The platform shift demonstrates how capability in one domain creates pressure toward vertical integration: voice excellence becomes insufficient once video workflows require coordinated multi-track audio.
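To make the audio-stack claim concrete, here is a minimal sketch of scripting a voice line and a sound effect from one file, assuming ElevenLabs’ publicly documented REST endpoints for text-to-speech and sound generation; exact field names and defaults shift between API versions, and Audio Studio’s timeline editing is a UI feature not reachable this way.

```python
# Minimal sketch: one script, two audio assets, assuming the documented
# ElevenLabs REST endpoints. Paths and field names may differ by API version.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]  # assumes the key is set in the environment
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# Text-to-speech: POST /v1/text-to-speech/{voice_id} returns raw audio bytes.
VOICE_ID = "YOUR_VOICE_ID"  # placeholder: any voice id from your account
tts = requests.post(
    f"{BASE}/text-to-speech/{VOICE_ID}",
    headers=HEADERS,
    json={"text": "Scene one, take one.", "model_id": "eleven_multilingual_v2"},
)
tts.raise_for_status()
with open("line.mp3", "wb") as f:
    f.write(tts.content)

# Text-to-sound-effect: POST /v1/sound-generation turns a prompt into an effect.
sfx = requests.post(
    f"{BASE}/sound-generation",
    headers=HEADERS,
    json={"text": "rain on a tin roof, distant thunder", "duration_seconds": 5.0},
)
sfx.raise_for_status()
with open("rain.mp3", "wb") as f:
    f.write(sfx.content)
```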
Lightricks Confirms LTX-2 as First Open-Source 4K Video Model with Audio
Lightricks positioned LTX-2 as a milestone for local AI video creation, releasing model weights that generate up to 20 seconds of 4K video with built-in audio, multi-keyframe support, and controllability adaptations. NVIDIA optimized inference through NVFP8 and NVFP4 data formats, delivering 3x faster performance and a 60% VRAM reduction on RTX 50 Series GPUs. The open-weights designation carries technical weight: training code and methodology are published under an OSI-approved license, enabling modification and integration without cloud API dependency. LTX-2 positions itself as an alternative to metered platforms, offering local execution with predictable costs for production workflows requiring stable tooling. ComfyUI integration supports a 4K upscaling pipeline, with performance gains of 40% on NVIDIA GPUs through recent optimization passes. The model represents the convergence of open-source accessibility with production-grade output quality previously exclusive to proprietary platforms.
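For creators weighing the local-execution pitch, a sketch of what generation could look like, assuming LTX-2 ships with a diffusers-style pipeline the way the earlier LTX-Video release did. The LTXPipeline class exists for LTX-Video; whether LTX-2 reuses it, and the checkpoint id below, are assumptions rather than confirmed API.

```python
# Hypothetical local-inference sketch. LTXPipeline is the diffusers class for
# the earlier LTX-Video; the "Lightricks/LTX-2" repo id is an assumption
# (the official links still read "Coming Soon").
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-2",          # hypothetical checkpoint id
    torch_dtype=torch.bfloat16,  # low-precision weights trim VRAM; NVFP8/NVFP4 would go further
).to("cuda")

frames = pipe(
    prompt="neon-lit alley at night, rain, slow dolly forward",
    width=704,        # LTX-Video's documented base resolution; 4K comes from the upscaling pipeline
    height=480,
    num_frames=161,   # about 6.7 seconds at 24 fps; the release claims up to 20 seconds
).frames[0]

export_to_video(frames, "alley.mp4", fps=24)
```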
Alibaba Reveals HappyHorse-1.0 After Anonymous Leaderboard Domination
Alibaba confirmed ownership of HappyHorse-1.0 on April 10, three days after the model appeared anonymously on Artificial Analysis leaderboards and climbed to #1 in both text-to-video and image-to-video blind rankings. Developed by Alibaba’s ATH AI Innovation Unit, the model scored 1333 Elo in text-to-video without audio and 1392 in image-to-video, beating the previous leaders by 60 and 37 points respectively. The stealth deployment (releasing anonymously, earning validation through blind human evaluation, then revealing authorship once scores were locked) marks a tactical repositioning around merit-first credibility. The timing capitalizes on collapsed Western competition: OpenAI discontinued Sora in March, and ByteDance paused Seedance 2.0’s global rollout following Hollywood copyright disputes. HappyHorse currently remains in closed beta, with an API rollout confirmed as forthcoming. Official pages claim a full open-source release including base, distilled, and upscaling versions, though the GitHub and HuggingFace links display “Coming Soon” as of April 10. Alibaba’s Hong Kong shares rose 2.12% on disclosure day.
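For scale: under the standard Elo expectation formula, a rating gap maps to an expected head-to-head win rate, so those 60- and 37-point margins can be read as blind-test win probabilities. A back-of-envelope check (our arithmetic, not Artificial Analysis methodology):

```python
# Expected win probability implied by an Elo gap: E = 1 / (1 + 10^(-d / 400)).
def expected_win_rate(elo_gap: float) -> float:
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400.0))

print(f"text-to-video, +60 Elo:  {expected_win_rate(60):.3f}")  # ~0.586
print(f"image-to-video, +37 Elo: {expected_win_rate(37):.3f}")  # ~0.553
```

In other words, a 60-point lead is roughly a 59/41 preference split in blind pairwise tests: decisive on a leaderboard, closer than the word “domination” suggests.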
KEY VISUAL
Pavel Krikunov @pvlkrknv
Pavel Krikunov constructs visual atmospheres where the analog and the generated coexist without hierarchy. The Moscow Film School-trained cinematographer, now based in New York and represented by VX Media, operates from a grunge-nocturnal sensibility that runs through both his on-set work and his experimentation with generative tools. He’s not an AI creator who learned the camera, but a cinematographer who integrated AI as an extension of a technical arsenal that includes a Lytro, a Fuji Rensha Cardia, a Soviet Peleng 8mm, and an Olympus with an intentionally broken sensor.
The visual language maintains aesthetic coherence independent of tool: architectural spaces in shadow, chromaticism reduced to sodium/cyan/concrete, figures traversing liminal zones. The constant reference to Eastern European science fiction (Tarkovsky, Stalker) isn’t nostalgic citation but a visual grammar applied systematically. When he integrates generated elements with a photographic base, the seam between captured and synthesized becomes visible not as technical error but as a formal statement about the hybrid nature of the contemporary image.
What makes the work formative isn’t explicit tutorials but the sustained demonstration of a cinematographic criterion applied consistently. Each piece functions simultaneously as an autonomous visual object and as a case study in how to maintain aesthetic coherence when the production process combines traditional capture, editing, color correction, and algorithmic synthesis. The chronological feed operates as an implicit curriculum where education happens not through verbal explanation but through pattern recognition: observing how decisions about light, composition, and timing persist regardless of whether the frame originates from a sensor or a generative model.
The proposition is that technical mastery of tools, analog or computational, remains subordinate to a defined artistic vision. Krikunov doesn’t teach “how to use AI” or “how to make grunge aesthetic”; he teaches how to think cinematographically when the set can be physical or latent, when the camera can be optical or algorithmic, and when the criterion for what constitutes a compelling image remains constant while the means to produce it multiply.
That’s all for now — we’ll be back in your inbox next week.