When humans solve intelligent tasks, they rely heavily on common-sense knowledge about the world. However, a detailed understanding of the physical world is still largely missing from current applications in artificial intelligence and robotics. Our mission is to change that. We are developing new, ground-breaking technology that allows machines to perceive the world the way humans do.
Our technology analyzes video and extracts meaning from it in real time. We build deep learning systems on our large video datasets of everyday world situations and then fine-tune those systems to specific use cases with minimal effort.
One of the limiting factors in advancing video understanding is the lack of large, diverse real-world video datasets. To overcome this bottleneck, we have built a scalable crowd-acting platform and used it to create some of the industry's largest video datasets for training deep neural networks.
Our deep neural networks are pre-trained on these datasets of crowd-acted videos, which contain short clips showing a wide range of physical and human actions. We then transfer the capabilities of the trained networks to specific video applications.
We pre-train deep neural networks on a foundational dataset for understanding physical actions. Large amounts of annotated video data are required for the models to develop an internal representation of how objects interact in the real world.
We then transfer this learned knowledge to complex problems that rely on an understanding of these fundamental physical concepts. Because the data requirements for each new task are drastically lower, we can address a wide variety of video use cases.
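The snippet below is a minimal sketch of this pre-train-and-transfer workflow, not our actual pipeline. It uses torchvision's publicly available r3d_18 backbone (pre-trained on Kinetics-400) as a stand-in for a proprietary pre-trained video network; the number of downstream classes and the dummy batch are purely illustrative.

```python
# Illustrative transfer-learning sketch: a public Kinetics-400 backbone
# stands in for a network pre-trained on a large action-video dataset.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_TASK_CLASSES = 5  # hypothetical downstream use case, e.g. gesture control

# 1. Load a video backbone pre-trained on a large action-recognition dataset.
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)

# 2. Freeze the backbone so its learned representation of physical actions
#    is preserved; only the new task head will be trained.
for param in model.parameters():
    param.requires_grad = False

# 3. Replace the classification head for the downstream task.
model.fc = nn.Linear(model.fc.in_features, NUM_TASK_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# 4. Fine-tune on a small task-specific dataset. Clips are tensors of
#    shape (batch, channels, frames, height, width); this dummy batch
#    stands in for real annotated video data.
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.randint(0, NUM_TASK_CLASSES, (2,))

model.train()
optimizer.zero_grad()
logits = model(clips)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

In practice, parts of the backbone can also be unfrozen once the new head has converged, trading additional compute for higher task accuracy.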
Our machine learning systems excel at deciphering complex human behavior in video
We are a technical team that is redefining how machines understand our world