NestCall AR | Data Visualization in Mixed Reality

AI, Sound Recognition, Teachable Machine, Mixed Reality Interactive Design, Gesture Interaction, Data Visualization in MR
Role
Developer, Interaction Designer, Video Producer
Time
Nov 2023 - Sept 2024

Concept Overview

NestCall AR is a Mixed Reality (MR) project inspired by the integration of AI technologies and the natural world. Leveraging the extensive bird observation database from eBird and the machine learning capabilities of Google’s Teachable Machine, the project lets users uncover and interact with bird species that are normally hidden or elusive. As users explore an environment such as a campus, birds are identified by their calls through sound recognition and added to a personalized account in a gamified way, much like Pokémon GO. The experience fosters environmental awareness by letting users engage with virtual representations of the birds via AR and hand gestures, learning more about their behavior and characteristics.

Design Concept

Detailed Design

The platform integrates sound-based bird recognition, AR visualization, and gesture-based interaction to create an engaging and educational experience. The core features include:

Bird Sound Recognition:

Users explore their surroundings while bird calls are captured in real time.

Recognized birds are added to the user's account as digital collectibles, building a personalized "bird collection."

AR Visualization: Once a bird is recognized, its digital avatar appears in the user's AR view, allowing them to "see" and interact with species that are otherwise hidden or hard to observe in nature.

Gesture-Based Interaction: Hand gestures let users interact with the virtual bird: feeding it, guiding its movement, or triggering animations that simulate natural bird behaviors.

Information Access: Users can view detailed information about the identified bird species, including their habitat, migration patterns, and conservation status, integrating educational content into the experience.
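
As a concrete illustration of this information-access step, the sketch below shows one way species reference data could be pulled from the eBird API 2.0 taxonomy endpoint. This is a minimal sketch, not the project's actual integration; the species code 'houspa' and the API key are placeholders, and richer details such as migration patterns and conservation status would come from additional sources.

```js
// Minimal sketch: look up species reference data via the eBird API 2.0
// taxonomy endpoint. The API key and species code are placeholders.
async function fetchSpeciesInfo(speciesCode) {
  const url = `https://api.ebird.org/v2/ref/taxonomy/ebird?species=${speciesCode}&fmt=json`;
  const res = await fetch(url, {
    headers: { 'X-eBirdApiToken': 'YOUR_EBIRD_API_KEY' }, // placeholder key
  });
  if (!res.ok) throw new Error(`eBird request failed: ${res.status}`);
  const [entry] = await res.json(); // taxonomy queries return an array
  return {
    common: entry.comName,
    scientific: entry.sciName,
    family: entry.familyComName,
  };
}

// Usage: 'houspa' is the eBird species code for House Sparrow.
fetchSpeciesInfo('houspa').then((info) => console.log(info));
```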

Prototype Development

The prototype combines sound recognition and gesture-based interaction in an interactive AR platform:

  1. Sound Recognition:
    • A Teachable Machine model was trained with audio samples of two bird species.
    • The prototype identifies these bird calls in real time, enabling the platform to recognize species present in the user's environment (see the first sketch after this list).
  2. Gesture Recognition:
    • A hand-tracking model built with p5.js and ml5.js detects the user's hand gestures.
    • The model maps gestures to bird behaviors, such as flying or chirping, for a dynamic, interactive experience (see the second sketch below).
  3. Integration:
    • AR functionality combines sound recognition and gesture-based interaction.
    • Users are guided seamlessly from identifying a bird by its call to engaging with its virtual representation (see the third sketch below).
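
Below is a minimal sketch of the sound-recognition step, assuming the ml5.js 0.x error-first callback API and a Teachable Machine audio model exported to a hosted URL; the model URL and the addBirdToCollection hook are hypothetical stand-ins for the project's own code.

```js
// Continuous bird-call classification with a Teachable Machine audio model.
let classifier;
// Hypothetical URL of the exported Teachable Machine model.
const modelURL = 'https://teachablemachine.withgoogle.com/models/MODEL_ID/model.json';

function preload() {
  // probabilityThreshold filters out low-confidence matches.
  classifier = ml5.soundClassifier(modelURL, { probabilityThreshold: 0.8 });
}

function setup() {
  noCanvas();
  // Starts continuous classification of microphone input.
  classifier.classify(gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // Results are sorted by confidence; Teachable Machine audio models
  // always include a "Background Noise" class, which we ignore.
  const { label, confidence } = results[0];
  if (label !== 'Background Noise') {
    addBirdToCollection(label, confidence);
  }
}

// Hypothetical app hook: stores the recognized species in the
// user's bird collection.
function addBirdToCollection(species, confidence) {
  console.log(`Recognized ${species} (${(confidence * 100).toFixed(1)}%)`);
}
```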
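
A similar sketch of the gesture step, assuming ml5.js 0.x handpose over a p5.js webcam capture; the pinch threshold and the feedBird hook are illustrative assumptions.

```js
// Pinch detection with ml5 handpose: thumb tip near index tip
// triggers a bird behavior.
let video;
let handpose;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  handpose = ml5.handpose(video, () => console.log('handpose ready'));
  // Landmark predictions stream in via the 'predict' event.
  handpose.on('predict', (results) => {
    predictions = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (predictions.length > 0) {
    const landmarks = predictions[0].landmarks; // 21 [x, y, z] keypoints
    const [thumbX, thumbY] = landmarks[4]; // thumb tip
    const [indexX, indexY] = landmarks[8]; // index fingertip
    // Pinch heuristic: fingertips closer than ~30 px.
    if (dist(thumbX, thumbY, indexX, indexY) < 30) {
      feedBird();
    }
  }
}

// Hypothetical app hook: would trigger the virtual bird's
// feeding animation in the AR scene.
function feedBird() {
  console.log('feed gesture detected');
}
```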
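
Finally, a hypothetical glue layer suggesting how the two recognizers could hand off to each other: a recognized call spawns a virtual bird, and subsequent gestures drive its behavior. Every name here is illustrative rather than taken from the prototype.

```js
// Illustrative integration layer: sound recognition spawns an avatar,
// gesture recognition drives its behavior.
let activeBird = null;

// Called by the sound classifier when a bird call is recognized.
function onBirdRecognized(species) {
  if (!activeBird) {
    activeBird = { species, state: 'idle' }; // stand-in for the AR avatar
    console.log(`Spawned AR avatar for ${species}`);
  }
}

// Called by the gesture detector with a named gesture.
function onGesture(name) {
  if (!activeBird) return;
  // Map gestures to the behaviors described above.
  const behaviors = { pinch: 'feeding', swipe: 'flying', wave: 'chirping' };
  activeBird.state = behaviors[name] ?? 'idle';
  console.log(`${activeBird.species} is now ${activeBird.state}`);
}
```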

Prototype Video

Concept Video
