The social intelligence layer for AI
Backed and supported by:

Social-Aware AI

Our multimodal AI system processes the complexity of human behavior in real time, analyzing body language, facial expressions, voice patterns, and contextual signals simultaneously.

By combining emotional state detection with behavioral analysis, we enable AI that truly understands human interaction.

Decoding human behavior

Our social intelligence layer integrates with any AI system, enabling it to detect and respond to the non-verbal cues that make up 93% of human communication. By recognizing subtle facial expressions, body language, and voice patterns, your AI can adapt in real time to users' emotional states and engagement levels.
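For illustration, here is a minimal sketch of what routing detected cues into an existing assistant could look like. Everything in it, the `SocialSignals` class, its fields, and the thresholds, is a hypothetical example, not our actual API.

```python
# Hypothetical sketch, not a real API: branch a reply on detected
# non-verbal state, not just on the words the user typed.
from dataclasses import dataclass

@dataclass
class SocialSignals:
    emotion: str       # e.g. "frustrated", "neutral"
    intensity: float   # 0.0 (weak) to 1.0 (strong)
    engagement: float  # 0.0 (disengaged) to 1.0 (engaged)

def respond(user_text: str, signals: SocialSignals) -> str:
    # Adapt to the detected state, not just the words typed.
    if signals.emotion == "frustrated" and signals.intensity > 0.7:
        return "Let's slow down. I'll walk you through it one step at a time."
    if signals.engagement < 0.3:
        return "Quick answer first, details only if you want them."
    return f"Here's what I found for: {user_text}"

print(respond("How do I reset my password?",
              SocialSignals(emotion="frustrated", intensity=0.9, engagement=0.6)))
```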

Our Mission

We believe AI should understand people, not just their words. Our mission is to humanize artificial intelligence by building the social intelligence layer that bridges the gap between AI systems and human interaction.

By teaching machines to recognize and respond to the subtle signals that make communication truly human, we're creating a future where technology adapts to people—not the other way around.

Meet our CEO

A few words from our CEO, Paula Petcu, on why we're focused on developing AI that understands human social signals and emotions.

Our Technology

Emotional Intelligence

Our AI simultaneously processes body language, facial expressions, and voice patterns, detecting 7 distinct emotional states and their intensity.
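As a rough illustration, a detector like this might expose its output as a distribution over states plus an intensity score. The seven labels and the schema below are assumptions: they use the widely known basic-emotion set as a stand-in for whatever taxonomy the model actually uses.

```python
# Hypothetical output schema. The seven labels are the commonly used
# basic-emotion set (Ekman's six plus neutral), used here as stand-ins.
EMOTIONAL_STATES = [
    "happiness", "sadness", "anger", "fear",
    "surprise", "disgust", "neutral",
]

def top_emotion(probs: dict[str, float]) -> tuple[str, float]:
    """Return the most likely state and its score as an intensity proxy."""
    state = max(probs, key=probs.get)
    return state, probs[state]

print(top_emotion({
    "happiness": 0.10, "sadness": 0.05, "anger": 0.60, "fear": 0.05,
    "surprise": 0.05, "disgust": 0.05, "neutral": 0.10,
}))  # -> ('anger', 0.6)
```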

Real-time Multimodal Analysis

Real-time processing of behavioral, emotional, and contextual data, creating a rich understanding of human interaction that goes beyond words.
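The general pattern, sketched below with placeholder functions throughout, is to score each modality independently on every tick of a capture loop and fuse the scores into one running estimate. The weights and stubs are invented for illustration.

```python
# Toy fusion loop; every function body here is a stub for a real model.
def face_score(frame):  return 0.7   # facial-expression model placeholder
def voice_score(chunk): return 0.4   # prosody model placeholder
def pose_score(frame):  return 0.6   # body-language model placeholder

def fuse(scores, weights=(0.5, 0.3, 0.2)):
    # Weighted late fusion: combine per-modality scores into one estimate.
    return sum(s * w for s, w in zip(scores, weights))

for tick in range(3):                # stands in for a live capture loop
    frame, chunk = None, None        # would come from camera and microphone
    engagement = fuse((face_score(frame), voice_score(chunk), pose_score(frame)))
    print(f"t={tick}: engagement ~ {engagement:.2f}")
```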

Behavioral Analysis

Our system analyzes affect patterns, activity states, and contextual markers to detect subtle behavioral signals and interaction dynamics.
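For instance, in a deliberately simplified sketch with made-up thresholds, a pattern detector can require that negative affect persist across a window of observations before flagging a signal, instead of reacting to a single noisy frame.

```python
from collections import deque

# Simplified sketch; window size and threshold are invented for illustration.
WINDOW, THRESHOLD = 10, 0.6
recent = deque(maxlen=WINDOW)

def observe(negative_affect: float) -> bool:
    """Return True once negative affect has been sustained across the window."""
    recent.append(negative_affect)
    return len(recent) == WINDOW and sum(recent) / WINDOW > THRESHOLD

for value in [0.2, 0.7, 0.8, 0.9, 0.8, 0.7, 0.9, 0.8, 0.7, 0.8, 0.9]:
    if observe(value):
        print("signal: sustained negative affect")
```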

Adaptive Response System

Our LLM-powered system adapts in real time to emotional states and behavioral cues, delivering natural conversation flow with a context-appropriate tone.
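One common way to wire detected state into a language model, sketched here under assumptions (the tone map and prompt template are illustrative, not our system's internals), is to fold the signals into the system prompt before each call.

```python
# Illustrative only: map detected state to tone instructions for the LLM.
TONE = {
    "frustrated": "Be brief, calm, and concrete. Offer one next step.",
    "confused":   "Slow down, define terms, and give a small example.",
    "engaged":    "Match the user's energy and go deeper.",
}

def build_system_prompt(emotion: str, intensity: float) -> str:
    style = TONE.get(emotion, "Use a neutral, helpful tone.")
    return (f"The user currently appears {emotion} "
            f"(intensity {intensity:.1f}). {style}")

# The returned string would be sent as the system message of the LLM call.
print(build_system_prompt("frustrated", 0.8))
```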

Our Research

We research how to align AI models with human emotions and social signals to enhance human-AI interactions.

Interpretability by design using computer vision for behavioral sensing in child and adolescent psychiatry
This study assesses the accuracy of ML-derived behavioral codes extracted from clinical interview videos, comparing them with human expert ratings to improve the reliability and scalability of psychiatric diagnostics.
Beyond Accuracy: Fairness, Scalability, and Uncertainty Considerations in Facial Emotion Recognition
This study examines the current state of facial emotion recognition (FER) models, highlighting issues of fairness, scalability, and robustness. It proposes metrics and algorithms to assess and improve these aspects, emphasizing the importance of fair and reliable FER models in clinical applications and beyond.
Scaling-up Behavioral Observation with Computational Behavior Recognition
This study proposes using open-source AI tools to automate behavioral coding in parent-child interactions and therapy sessions, improving the scalability, consistency, and depth of analysis over traditional human coding. It also discusses privacy, bias, and validation methods, highlighting the potential of these tools in psychological research and clinical practice.
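In the spirit of that last study (our own sketch, not the paper's code), open-source pose estimation can turn raw session video into codable behavioral features, for example with MediaPipe:

```python
# Sketch only: extract a simple codable feature (head height) from a video
# frame using the open-source MediaPipe pose model.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def head_height(frame_bgr):
    """Return the nose's normalized vertical position (0 = top of frame)."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            return None
        nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
        return nose.y  # tracked over time, a crude proxy for head drops

# Usage: frame = cv2.imread("session_frame.jpg"); print(head_height(frame))
```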

Team

Our founding team combines expertise in AI research, machine learning, data science, and business leadership.
