Blog

How we built a robust gesture recognition system using standard 2D cameras
Sep 14, Twenty Billion Neurons
A bespoke storage format for deep learning on videos
Sep 8, Valentin Haenel
Understanding what neural networks see when classifying videos
Aug 22, Raghav Goyal
Wide Range of Applications
Use cases range from human gesture recognition to security video monitoring
Custom Datasets
We own some of the largest industry datasets for intelligent video analysis
Runs in Real-time
Our software extracts meaning from continuous video streams in real-time
Flexible Licensing
Licensing options range from one-off royalty to subscription models

Our mission



When humans solve intelligent tasks, they rely heavily on their common sense knowledge about the world. A detailed understanding of the physical world, however, is still largely missing from current applications in artificial intelligence and robotics. Our mission is to change that. We are developing new, ground-breaking technology that allows machines to perceive the world like humans.

How it works



Our technology can analyze video and extract meaning from it in real-time. We build deep learning systems using our large video datasets about common world situations and then fine-tune them to specific use cases with minimal effort.
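
To make this concrete, a real-time pipeline of this kind typically keeps a sliding window of the most recent frames and re-runs a trained network whenever the window is full. The Python sketch below illustrates the general pattern; the model file, frame size, window length, and expected input layout are assumptions for illustration, not details of our production system.

```python
# Minimal sketch of real-time video inference, assuming a generic PyTorch
# action-recognition model exported with TorchScript (hypothetical file).
import collections

import cv2
import torch

WINDOW = 16  # frames per prediction; an illustrative choice
model = torch.jit.load("action_model.pt").eval()  # hypothetical exported model

frames = collections.deque(maxlen=WINDOW)
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize, scale to [0, 1], and keep only the most recent WINDOW frames.
    frame = cv2.resize(frame, (224, 224))
    frames.append(torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0)
    if len(frames) == WINDOW:
        # Assumed input layout: a single clip shaped (1, T, C, H, W).
        clip = torch.stack(list(frames)).unsqueeze(0)
        with torch.no_grad():
            probs = model(clip).softmax(dim=-1)
        print("predicted class index:", probs.argmax().item())

cap.release()
```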


A novel database

One of the limiting factors for advancing video understanding is the lack of large and diverse real-world video datasets. To circumvent this bottleneck, we have built a scalable crowd-acting platform and have created some of the largest industry video datasets for training deep neural networks.

A unique approach

Our deep neural networks are pre-trained on our datasets of crowd-acted videos. The datasets contain short video clips that show a wide range of physical and human actions. We then transfer the capabilities of our trained networks to specific video applications.

01

Step one

We pre-train deep neural networks on a foundational dataset for understanding physical actions. A large amount of annotated video data is required so that the models develop an internal representation of how objects interact in the real world.

02

Step two

We transfer this intrinsic knowledge to solve problems of high complexity that rely on an understanding of these fundamental physical concepts. The data requirements are drastically lower, allowing us to solve a large variety of video use cases.
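
A minimal sketch of this two-step recipe is shown below, using a publicly available torchvision video backbone as a stand-in for our pre-trained networks; the pre-training weights, class count, and training step are illustrative assumptions rather than our actual setup.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Step one stand-in: a backbone pre-trained on a large, generic video dataset
# (here torchvision's Kinetics-400 weights, used purely for illustration).
backbone = r3d_18(weights="KINETICS400_V1")

# Step two: replace the classification head and fine-tune on a small,
# task-specific dataset, e.g. a handful of hand gestures.
NUM_GESTURES = 10  # illustrative class count
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_GESTURES)

# Freezing everything except the new head keeps data requirements low.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(clips, labels):
    """One gradient step on a batch of clips shaped (N, C, T, H, W)."""
    optimizer.zero_grad()
    loss = criterion(backbone(clips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training only the new head is the simplest form of transfer; in practice, more layers can be unfrozen as the amount of target data grows.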

Use Cases



Our machine learning systems excel at deciphering complex human behavior in video



Gesture Recognition
Automatic detection of dynamic hand gestures for human-computer interaction
Personal Home Robots
Visual scene understanding for domestic robots interacting with humans
Elderly Fall Detection
Automatic detection of accidental falls among elderly people at home
Aggression Detection
Automatic detection of aggressive behavior in public transport systems
Theft Detection
Automatic detection of suspicious activity like theft or shoplifting in stores
Video-based Ad Suggestions
Media system that automatically places ads based on the video's content
Textual Video Search
Media system that allows users to search and discover video content
Video Content Moderation
Media system that automatically detects and removes offensive video material


Our Data Factory

We are crowdsourcing large-scale datasets that explain the world in videos




Our Core Datasets

General Visual Knowledge
Our core dataset is used to teach machines common sense and basic physical concepts
Human Actions
Our scene understanding dataset is used to detect human behavior and complex actions in context
Hand Gestures
The world's largest video-based dataset for reading dynamic hand gestures

About us


We are a technical team that is redefining how machines understand our world

Dr. Christian Thurau
Chief Biz Dev Officer & Co-Founder
Roland Memisevic, PhD
Chief Scientist & Co-Founder
Dr. Florian Hoppe
Chief Operating Officer & Co-Founder
Dr. Ingo Bax
Chief Technology Officer & Co-Founder
Valentin Haenel
VP of Engineering
Susanne Westphal
A.I. Engineer
Moritz Müller-Freitag
Chief Product Officer
Raghav Goyal
A.I. Engineer
Joanna Materzynska
Junior A.I. Engineer
Dr. Heuna Kim
A.I. Researcher
Manuela Hartmann
Executive Assistant & Office Manager
Fadi Abu-Gharbieh
Data Engineer
Eren Gölge
A.I. Engineer
Héctor Marroquin
Crowdsourcing Supporter
Farzaneh Mahdisoltani
A.I. Researcher
Waseem Gharbieh
A.I. Researcher
Guillaume Berger
Research Intern
Till Breuer
A.I. Engineer
Melanie Metzner
Executive Assistant & Office Manager
Florian Letsch
A.I. Engineer
You?
Check our Open Positions

Advisors



Yoshua Bengio
Montreal Institute for Learning Algorithms
Nathan Benaich
Peter N. Yianilos
CEO Edgestream Partners

Contact Us



Send us a message




Contact information


Berlin

Twenty Billion Neurons GmbH
Oppelner Str. 26/27
10997 Berlin
Germany

+49 30 5564 3880


Toronto

Twenty Billion Neurons Inc.
111 Queen Street East
South Building, Suite 450
Toronto, Ontario, M5C 1S2
Canada

+1 647 256 3554