Interhuman is an AI research and product studio.

Social-Aware AI

Our multimodal AI system processes the complexity of human behavior in real time, simultaneously analyzing body language, facial expressions, voice patterns, and contextual signals.

By combining emotional state detection with behavioral analysis, we enable AI that truly understands human interaction.

Offstage:
AI roleplays for training

Every company knows that practice improves performance. But traditional roleplay training is expensive, inconsistent, and hard to scale. Offstage changes this by making effective practice available anytime, anywhere. Our AI roleplays analyze communication style, emotional intelligence, and decision-making in real time.

Learners get instant feedback, managers see measurable improvements, and companies can finally scale soft skills training that works.
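
As a rough illustration of what "instant feedback" could look like under the hood, here is a minimal sketch that rolls per-turn analysis scores up into a session summary. The rubric dimensions, the 0-1 scale, and the function names are invented for the example and do not describe Offstage's actual implementation.

```python
# Hypothetical illustration only: aggregate per-turn scores (0-1 scale,
# invented rubric) into the kind of summary a learner might see.
from statistics import mean

def feedback_report(turns: list[dict]) -> dict:
    dims = ("communication_style", "emotional_intelligence", "decision_making")
    report = {d: round(mean(t[d] for t in turns), 2) for d in dims}
    # Flag the lowest-scoring dimension so the feedback is actionable.
    report["focus_area"] = min(dims, key=report.get)
    return report

# Example: three analyzed turns from one roleplay session.
session = [
    {"communication_style": 0.8, "emotional_intelligence": 0.6, "decision_making": 0.7},
    {"communication_style": 0.7, "emotional_intelligence": 0.5, "decision_making": 0.9},
    {"communication_style": 0.9, "emotional_intelligence": 0.7, "decision_making": 0.8},
]
print(feedback_report(session))  # focus_area: emotional_intelligence
```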

Our Mission

Decoding Human Interaction
We build AI systems that understand social signals and emotions in real time. Our technology powers applications that solve real human challenges, starting with more effective professional training through AI roleplay.

From Research to Impact — Combining neuroscience, psychology, and computer science to create practical AI solutions that enhance how people learn, communicate, and grow.

Why Social-Aware AI?

A few words from our CEO, Paula Petcu, on why we're focused on developing AI that understands human social signals and emotions.

Our Technology

Emotional Intelligence

Our AI simultaneously processes body language, facial expressions, and voice patterns, detecting seven distinct emotional states along with their intensity.
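
As a hedged sketch of what a seven-state classifier with an intensity estimate might look like, the toy example below uses a linear head over a fused feature vector. The label set, feature dimension, and random weights are all assumptions for illustration, not the production model.

```python
# Toy sketch, not the production model: a linear head mapping a fused
# feature vector to 7 emotion classes plus a scalar intensity in [0, 1].
import numpy as np

# Assumed label set (the classic "basic emotions"); the real categories
# are not specified in this document.
EMOTIONS = ["neutral", "happy", "sad", "angry", "fearful", "surprised", "disgusted"]

rng = np.random.default_rng(0)
W_cls = rng.normal(size=(len(EMOTIONS), 64))  # toy weights; trained in practice
w_int = rng.normal(size=64)

def predict(features: np.ndarray) -> tuple[str, float]:
    logits = W_cls @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                  # softmax over the 7 states
    intensity = 1.0 / (1.0 + np.exp(-w_int @ features))   # sigmoid -> [0, 1]
    return EMOTIONS[int(probs.argmax())], float(intensity)

state, strength = predict(rng.normal(size=64))
print(state, round(strength, 2))
```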

Real-time Multimodal Analysis

Real-time processing of behavioral, emotional, and contextual data, creating a rich understanding of human interaction that goes beyond words.
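
One common way to structure such a pipeline is late fusion: encode each modality separately at every synchronized time step, then concatenate the results into a single feature vector. The sketch below illustrates that general pattern with stand-in encoders; the data shapes are assumptions, and this is not a description of our actual architecture.

```python
# Generic late-fusion sketch (assumed design, stand-in encoders): each
# synchronized time step carries face, pose, and audio data, which are
# encoded separately and fused into one feature vector per step.
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    face_crop: np.ndarray       # e.g. a cropped face image
    pose_keypoints: np.ndarray  # e.g. 17 (x, y) body landmarks
    audio_chunk: np.ndarray     # e.g. 100 ms of waveform

def fuse(frame: Frame) -> np.ndarray:
    # Stand-ins for trained vision/audio encoders.
    face_vec = frame.face_crop.mean(axis=(0, 1))  # per-channel means
    pose_vec = frame.pose_keypoints.flatten()
    audio_vec = np.array([frame.audio_chunk.mean(), frame.audio_chunk.std()])
    return np.concatenate([face_vec, pose_vec, audio_vec])

# One synthetic time step; a real stream would yield these continuously.
frame = Frame(
    face_crop=np.zeros((224, 224, 3)),
    pose_keypoints=np.zeros((17, 2)),
    audio_chunk=np.zeros(1600),  # 100 ms at 16 kHz
)
print(fuse(frame).shape)  # (3 + 34 + 2,) = (39,)
```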

Behavioral Analysis

Our system analyzes affect patterns, activity states, and contextual markers to detect subtle behavioral signals and interaction dynamics.
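
For intuition, one simple way to surface a sustained behavioral shift is a sliding window over per-frame affect scores compared against a running baseline. The window size and threshold below are invented for the example; real behavioral analysis is considerably richer.

```python
# Illustrative only (assumed window and threshold): flag frames where
# the windowed mean of an affect score drifts from a running baseline.
from collections import deque

def detect_shifts(scores, window=30, threshold=0.25):
    """Yield (frame_index, delta) whenever the mean over the last
    `window` frames departs from the baseline by more than `threshold`."""
    recent = deque(maxlen=window)
    baseline = None
    for i, score in enumerate(scores):
        recent.append(score)
        if len(recent) < window:
            continue
        windowed_mean = sum(recent) / window
        if baseline is None:
            baseline = windowed_mean
        elif abs(windowed_mean - baseline) > threshold:
            yield i, windowed_mean - baseline
            baseline = windowed_mean  # re-anchor after reporting a shift

# Example: a calm stretch followed by a sustained rise in arousal.
signal = [0.2] * 60 + [0.8] * 60
for frame_idx, delta in detect_shifts(signal):
    print(f"shift of {delta:+.2f} at frame {frame_idx}")
```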

Adaptive Response System

Our LLM-powered system adapts in real time to emotional states and behavioral cues, delivering natural conversational flow with a context-appropriate tone.
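
The general pattern can be sketched as folding the detected state into the system prompt before each turn. Everything below is illustrative: `call_llm` is a stub standing in for a real chat-completion API, and the scenario text is invented.

```python
# Hedged sketch of the general pattern: condition the LLM on the
# detected emotional state by folding it into the system prompt.
def call_llm(system: str, messages: list[str]) -> str:
    return f"[reply conditioned on: {system!r}]"  # placeholder stub

def build_system_prompt(scenario: str, emotion: str, intensity: float) -> str:
    tone = "gently, with extra patience" if intensity > 0.7 else "naturally"
    return (
        f"You are roleplaying: {scenario}. "
        f"The learner currently appears {emotion} (intensity {intensity:.2f}); "
        f"respond {tone} and keep the scenario on track."
    )

def respond(history: list[str], emotion: str, intensity: float) -> str:
    system = build_system_prompt("a difficult performance review", emotion, intensity)
    return call_llm(system, history)

print(respond(["I'm not sure this review is fair."], "frustrated", 0.82))
```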

Our Research

We research AI models and how to align them with human emotions and social signals to enhance human interaction.

Interpretability by design using computer vision for behavioral sensing in child and adolescent psychiatry
This study assesses the accuracy of machine-learning-derived behavioral codes extracted from clinical interview videos, comparing them with human expert ratings to improve the reliability and scalability of psychiatric diagnostics.
Beyond Accuracy: Fairness, Scalability, and Uncertainty Considerations in Facial Emotion Recognition
This study examines the current state of facial emotion recognition (FER) models, highlighting issues of fairness, scalability, and robustness, and proposes metrics and algorithms to assess and improve these aspects, emphasizing the importance of fair and reliable FER models in clinical applications and beyond.
Scaling-up Behavioral Observation with Computational Behavior Recognition
This study proposes using open-source AI tools to automate behavioral coding in parent-child interactions and therapy sessions, enhancing the scalability, consistency, and depth of analysis while addressing the limitations of traditional human coding. It also discusses privacy, bias, and validation methods, highlighting the potential of these tools in psychological research and clinical practice.

Team

Our founding team combines deep expertise in AI research, machine learning, data science, and business leadership.