Interhuman AI raises €2M pre-seed round

Social intelligence for any AI application.

A single API that detects and interprets behavioral signals across video, audio, and text in real time, enabling applications that adapt to how people actually communicate.
Inter-0

The first multimodal model for social intelligence.

Our first model, Inter-0, detects agreement, confusion, engagement, hesitation, and eight more observable behavioral signals, giving AI applications granular insight into the human on the other side of the screen.
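
To make this concrete, here is a minimal sketch of what calling such an API could look like. Interhuman AI's SDK is not documented on this page, so the endpoint URL, request fields, and response shape below are illustrative assumptions, not the real interface.

```python
# Hypothetical sketch: Interhuman AI's client, endpoint, and response shape
# are not published here, so everything below is an illustrative assumption.
import requests

API_URL = "https://api.interhuman.example/v1/analyze"  # placeholder URL

def detect_signals(video_path: str, transcript: str, api_key: str) -> dict:
    """Send one conversational turn for behavioral-signal analysis."""
    with open(video_path, "rb") as clip:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"video": clip},
            data={"transcript": transcript},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: one confidence score per detected signal, e.g.
    # {"signals": {"agreement": 0.82, "confusion": 0.11, "hesitation": 0.47}}
    return response.json()
```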

How it works

Trained to understand context

Our model analyzes video, audio, and text together, not as separate streams but as integrated context.

It interprets the signals it detects based on the context of the conversation. A pause means something different in negotiation training than in medical communication practice.
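
As an illustration of context-dependent interpretation, the sketch below reads the same raw pause differently depending on the conversational scenario. The scenario labels and thresholds are invented for this example and are not taken from Inter-0.

```python
# Illustrative only: scenario labels and thresholds are invented, not from Inter-0.

def interpret_pause(pause_seconds: float, scenario: str) -> str:
    """Read the same raw signal differently depending on conversational context."""
    if scenario == "negotiation_training":
        # In a negotiation, a long pause can be a deliberate tactic.
        return "tactical_pause" if pause_seconds > 2.0 else "natural_pause"
    if scenario == "medical_communication":
        # In patient communication practice, the same pause may signal distress.
        return "possible_distress" if pause_seconds > 2.0 else "natural_pause"
    return "unclassified"

print(interpret_pause(3.1, "negotiation_training"))   # tactical_pause
print(interpret_pause(3.1, "medical_communication"))  # possible_distress
```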

Our Mission

We believe AI should understand people, not just their words. Our mission is to humanize artificial intelligence by building the social intelligence layer that bridges the gap between AI systems and human interaction.

By teaching machines to recognize and respond to the subtle signals that make communication truly human, we're creating a future where technology adapts to people, not the other way around.

Meet our CEO

A few words from our CEO, Paula Petcu, on why we're focused on developing AI that understands human social signals and emotions.

Our Technology

Emotional Intelligence

Our AI processes body language, facial expressions, and voice patterns simultaneously, detecting seven distinct emotional states as well as emotional intensity.
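
The page does not enumerate the seven states, so as a placeholder the sketch below uses the common seven-class scheme from facial emotion recognition research (Ekman's six basic emotions plus neutral); that taxonomy is an assumption, not Interhuman AI's published list.

```python
# Placeholder taxonomy: the page does not list the seven states, so this uses
# the common Ekman-six-plus-neutral scheme from facial emotion recognition.
from dataclasses import dataclass
from enum import Enum

class EmotionalState(Enum):
    ANGER = "anger"
    DISGUST = "disgust"
    FEAR = "fear"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    SURPRISE = "surprise"
    NEUTRAL = "neutral"

@dataclass
class EmotionEstimate:
    state: EmotionalState
    intensity: float  # 0.0 (barely detectable) to 1.0 (maximal)

estimate = EmotionEstimate(EmotionalState.SURPRISE, intensity=0.6)
```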

Real-time Multimodal Analysis

Real-time processing of behavioral, emotional, and contextual data creates a rich understanding of human interaction that goes beyond words.
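
As a rough sketch of what fusing the three streams per time window might involve, the skeleton below concatenates per-modality features into one vector. The window size, field names, and naive early-fusion step are assumptions for illustration, not the actual architecture.

```python
# Illustrative fusion skeleton: field names and the fusion rule are assumptions.
from dataclasses import dataclass

@dataclass
class Window:
    """One short time slice of the conversation (e.g., 500 ms)."""
    video_features: list[float]   # e.g., facial action-unit activations
    audio_features: list[float]   # e.g., prosody: pitch, energy, pause length
    text_tokens: list[str]        # words spoken in this slice

def fuse(window: Window) -> list[float]:
    """Naive early fusion: concatenate modality features into one vector."""
    text_length = float(len(window.text_tokens))
    return window.video_features + window.audio_features + [text_length]
```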

Behavioral Analysis

Our system analyzes affect patterns, activity states, and contextual markers to detect subtle behavioral signals and interaction dynamics.

Adaptive Response System

An LLM-powered system adapts in real time to emotional states and behavioral cues, delivering natural conversation flow with a context-appropriate tone.
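
To show how detected signals might steer an LLM's reply, here is a minimal adaptation loop under stated assumptions: the signal names, thresholds, and prompt format are invented for illustration, and any chat-completion client could stand in for `generate_reply`.

```python
# Illustrative adaptation loop: the signal names and generate_reply() are
# hypothetical stand-ins, not Interhuman AI's published interface.

def style_hint(signals: dict) -> str:
    """Turn detected behavioral signals into a tone instruction for the LLM."""
    if signals.get("confusion", 0.0) > 0.5:
        return "The user seems confused: slow down and re-explain in simpler terms."
    if signals.get("hesitation", 0.0) > 0.5:
        return "The user is hesitating: invite questions and reassure them."
    return "The user is engaged: keep the current pace and tone."

def respond(user_turn: str, signals: dict, generate_reply) -> str:
    """Compose a context-appropriate reply by prepending the tone instruction."""
    prompt = f"{style_hint(signals)}\n\nUser: {user_turn}\nAssistant:"
    return generate_reply(prompt)
```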

Our Research

We research how to build models and align them with human emotions and social signals to enhance human-AI interaction.

Interpretability by design using computer vision for behavioral sensing in child and adolescent psychiatry
This study assesses the accuracy of ML-derived behavioral codes from clinical interview videos, comparing them with ratings by human experts to improve reliability and scalability in psychiatric diagnostics.
Beyond Accuracy: Fairness, Scalability, and Uncertainty Considerations in Facial Emotion Recognition
This study examines the current state of facial emotion recognition (FER) models, highlighting issues of fairness, scalability, and robustness. It proposes metrics and algorithms to assess and improve these aspects, emphasizing the importance of fair and reliable FER models in clinical applications and beyond.
Scaling-up Behavioral Observation with Computational Behavior Recognition
This study proposes using open-source AI tools to automate behavioral coding in parent-child interactions and therapy sessions, enhancing the scalability, consistency, and depth of analysis and addressing the limitations of traditional human coding. It also discusses privacy, bias, and validation methods, highlighting the potential of these tools in psychological research and clinical practice.
Backed and supported by:

Team

Our founding team combines deep expertise in AI research, data science, machine learning, and business leadership.

Works on top of any model. Seamless automation.

Use cases
  • Sales Training: Roleplay