Industry Activities

Work Experience

Snap Inc. — London, UK

Human3D Lead, GenAI/ML · May 2024 – Present

Leading a team of ~15 CV/ML engineers (UK, US)

  • 3D body and hand modelling, reconstruction and tracking
  • Controlled human image & video synthesis for GenAI use-cases
  • Text-to-3D animatable characters

Manager, Computer Vision · Sep 2020 – May 2024

Human understanding & generation

  • 3D body and hand modelling, reconstruction and tracking
  • Pixel-level real-time mobile human perception

Ariel AI — London, UK

Co-Founder, Chief of R&D (acquired by Snap) · Nov 2018 – Sep 2020

Co-founded Ariel AI to bring research ideas into products that reach beyond academia

  • Real-time body and hand reconstruction on mobile devices
  • Statistical parametric models for bodies and hands

Selected Industry Projects

Below I list selected industry projects I led or contributed to, with brief technical context.

Body Tracking and Reconstruction

Our in-house statistical parametric models of human body shape and pose underpin a suite of on-device 3D reconstruction systems for robust, real-time 3D human AR on mobile. Our reconstruction pipeline estimates articulated pose and subject-specific shape in camera space and exposes a rig-compatible skeleton for driving meshes in real time.
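To make the pipeline concrete, here is a minimal sketch of how a statistical parametric body model can drive a mesh from estimated pose and shape, in the spirit of SMPL-style models: a linear shape basis deforms a template, and per-joint transforms are blended onto vertices via linear blend skinning. This is a generic illustration under assumed array shapes and names, not Snap's proprietary model.

```python
# Minimal sketch of a parametric body model driving a mesh via linear blend
# skinning (LBS). All shapes, names, and parameters are hypothetical; this is
# not the in-house production model.
import numpy as np

def skin_mesh(template_verts, shape_dirs, betas, joint_transforms, skin_weights):
    """Deform a template mesh from shape coefficients and per-joint transforms.

    template_verts:   (V, 3) template vertices in the rest pose
    shape_dirs:       (V, 3, B) linear shape basis (blend shapes)
    betas:            (B,) subject-specific shape coefficients
    joint_transforms: (J, 4, 4) per-joint transforms relative to the rest pose
    skin_weights:     (V, J) per-vertex skinning weights (rows sum to 1)
    """
    # 1) Apply the identity-dependent shape deformation to the template.
    shaped = template_verts + np.einsum("vcb,b->vc", shape_dirs, betas)

    # 2) Blend the per-joint transforms for every vertex (linear blend skinning).
    per_vertex_T = np.einsum("vj,jab->vab", skin_weights, joint_transforms)  # (V, 4, 4)

    # 3) Apply the blended transforms in homogeneous coordinates.
    homo = np.concatenate([shaped, np.ones((shaped.shape[0], 1))], axis=1)   # (V, 4)
    posed = np.einsum("vab,vb->va", per_vertex_T, homo)[:, :3]
    return posed
```

The rig-compatible skeleton mentioned above corresponds to the per-joint transforms: any creator model skinned to the same skeleton can be driven by the same pose estimates.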

In Lens Studio, our technology surfaced as 3D Body Tracking, mapping joint rotations and positions to creator models. For surface-aware effects, Body Mesh provides a template mesh with stable UVs and fixed topology, serving as a reference surface for garment attachment and silhouette edits. The video-to-animation workflow enables drag-and-drop motion extraction and retargeting onto humanoid models, reducing manual keyframing.

3D Body Tracking: video-to-animation (left), real-time body mesh reconstruction (right)

On-device 3D body mesh reconstruction, together with External Mesh, enables runtime garment retargeting: garment meshes are driven by estimated body pose and shape with occlusion-aware shading. This foundation powers digital fashion and virtual try-on. One example is the Vogue × Snapchat AR fashion exhibition, a collaboration between Snap and leading fashion houses as part of London Fashion Week; the experiences extended beyond the venue, so users worldwide could access them.

Virtual try-on and digital fashion: real-time try-on experiences by creators and brand partners (left), Vogue × Snapchat “Redefining the Body” exhibition powered by our body tracking technology (right)

Controllable Human Generation

We leverage our human perception stack — body/hand/face pose, instance segmentation, person-aligned normals/depth, and pixel correspondences — as control signals for image and video generation. Conditioning diffusion-based generators on these signals constrains structure, viewpoint, and motion, enabling geometry-consistent synthesis. This supports multiple GenAI products, such as real-time, on-device full-body AI effects in AR and personalized image and video experiences, where appearance is preserved while style or motion is driven by prompts or exemplars.

Controlled generation of humans enables many GenAI/AR experiences: real-time generative full-body AR experiences (left), an example of controlled video generation (right)
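One common way to use such perception signals as controls is to feed the dense maps to the generator alongside the noisy latent, e.g. by channel concatenation. The sketch below illustrates that pattern only; the actual product models, architectures, and conditioning mechanisms are not described here, and `ControlConditionedDenoiser` is a hypothetical stand-in.

```python
# Hedged sketch: conditioning a diffusion denoiser on dense human perception
# signals (pose heatmaps, instance masks, normals/depth, correspondences) by
# concatenating them with the noisy latent along the channel axis.
# Illustrative only; not the production architecture.
import torch
import torch.nn as nn

class ControlConditionedDenoiser(nn.Module):
    def __init__(self, latent_ch=4, control_ch=9, hidden=64):
        super().__init__()
        # Hypothetical stand-in for a UNet denoiser; the key point is that the
        # first convolution sees latent and control channels together.
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + control_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, control_maps):
        # control_maps: (B, control_ch, H, W), resized to the latent resolution.
        x = torch.cat([noisy_latent, control_maps], dim=1)
        return self.net(x)  # prediction constrained by structure, viewpoint, motion
```

The prediction then plugs into a standard diffusion sampling loop, so the generated person follows the pose, segmentation, and depth supplied as control while style or appearance is driven by prompts or exemplars.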

3D Animatable Character Generator

Conventional asset creation is labor-intensive: concept design, modeling, UV unwrapping, texturing, retopology, and rigging, with each step requiring expert time and iteration. In Body Generator, a user specifies a prompt (e.g., “astronaut mouse in a bomber jacket”), adjusts an intensity parameter that controls geometric deformation, optionally removes the head for costume use, and iteratively updates geometry and materials via additional prompts. Outputs are previewed and then imported into Lens Studio for animation and deployment. The resulting 3D animatable assets can be used in XR experiences, games, and more.

Body Generator (Lens Studio GenAI Suite) generates riggable 3D characters from text or image prompts, reducing or eliminating manual 3D modeling
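As a rough illustration of the user-facing controls described above, the sketch below models a generation request as a small data structure. The names and fields are hypothetical and do not correspond to the actual Lens Studio or Body Generator API.

```python
# Hypothetical sketch of the Body Generator controls described in the text;
# not the real API.
from dataclasses import dataclass, field

@dataclass
class BodyGeneratorRequest:
    prompt: str                      # e.g. "astronaut mouse in a bomber jacket"
    intensity: float = 0.5           # how far geometry may deform from the base body
    remove_head: bool = False        # costume use: keep the user's own head visible
    refinement_prompts: list = field(default_factory=list)  # iterative geometry/material edits

request = BodyGeneratorRequest(
    prompt="astronaut mouse in a bomber jacket",
    intensity=0.7,
    refinement_prompts=["make the jacket leather"],
)
```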

Real-Time 3D Hand Tracking

Our on-device 3D hand tracking system is based on proprietary statistical parametric models of human hand shape and pose. By exposing a rig-compatible representation in Lens Studio, the system enables robust hand-anchored AR: physically stable effects attached to fingers and palm, interaction with virtual objects under viewpoint changes, and consumer try-on scenarios such as rings. Delivering this reconstruction capability to creators as a reusable tool substantially lowered the barrier to building hand-centric AR, and has supported a wide variety of popular experiences in the community.

Examples of real-time hand-tracking AR experiences from the community and sponsors: visual effects, 3D nails, object holding, ring try-on.
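The hand-anchoring idea itself is simple once a rig-compatible representation is exposed: each frame, the tracker provides per-joint transforms, and a virtual object stays rigidly attached by composing a joint transform with a fixed local offset. The sketch below is a generic illustration with hypothetical names, not the Lens Studio API.

```python
# Minimal sketch of hand-anchored AR: attach a virtual object (e.g. a ring) to
# a tracked hand joint by composing transforms. Names and values are
# hypothetical.
import numpy as np

def attach_to_joint(joint_world_T, local_offset_T):
    """Both arguments are 4x4 homogeneous transforms; returns the object's
    world transform so it stays rigidly attached to the tracked joint."""
    return joint_world_T @ local_offset_T

# Example: place a ring slightly offset from a finger joint.
ring_offset = np.eye(4)
ring_offset[:3, 3] = [0.0, 0.005, 0.0]   # 5 mm along the joint's local Y axis
joint_T = np.eye(4)                      # would come from the hand tracker each frame
ring_world_T = attach_to_joint(joint_T, ring_offset)
```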

Pixel-Level Real-Time Mobile Human Perception

We released a suite of pixel-level human perception models that operate in real time on-device, exposing dense signals directly within Lens Studio. Body Instance Segmentation produces subject-specific binary masks (indexed consistently across features), enabling matting, background replacement, and subject-aware compositing. The Person-Aligned Normals and Depth model provides per-pixel surface orientation and distance over the visible body, supporting physically coherent relighting, occlusion, and collision with virtual content. Pixel-level correspondence maps each image pixel on the person to a canonical body parameterization, allowing stable, low-latency warping of garments and textures onto the user, enabling real-time garment try-on (as shown on the right below).

Pixel-level real-time mobile human perception. Depth, surface normals, instance segmentation (left), garment transfer enabled by pixel-level correspondences (right).
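To make the correspondence-driven garment transfer concrete, here is a hedged sketch: a dense map gives, for each person pixel, its (u, v) location in a canonical body parameterization; a garment texture authored in that canonical space is then warped onto the image by sampling it at those coordinates. Array names and shapes are illustrative, not the production pipeline.

```python
# Sketch of correspondence-driven garment transfer via per-pixel UV lookups.
# Illustrative only; names and shapes are assumptions.
import cv2
import numpy as np

def warp_garment(frame, garment_texture, uv_map, person_mask):
    """frame:           (H, W, 3) camera image
    garment_texture: (Ht, Wt, 3) garment authored in canonical UV space
    uv_map:          (H, W, 2) per-pixel (u, v) in [0, 1] on the person
    person_mask:     (H, W) bool, True where correspondences are valid"""
    Ht, Wt = garment_texture.shape[:2]
    # For each image pixel, look up the garment texel at its canonical (u, v).
    map_x = (uv_map[..., 0] * (Wt - 1)).astype(np.float32)
    map_y = (uv_map[..., 1] * (Ht - 1)).astype(np.float32)
    warped = cv2.remap(garment_texture, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    out = frame.copy()
    out[person_mask] = warped[person_mask]   # composite the garment over the person
    return out
```

Because the correspondence map is estimated per frame, the warped garment follows the user with low latency, which is what enables the real-time try-on shown above.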