Gauss Algorithmic, a company I’ve built together with my friend Jiri Polcar, has been fortunate enough to work with one of the world's largest gaming publishers, and while we cannot say much about this case right now, it has given us as AI practitioners an exclusive opportunity to see how our favorite video games are developed. During this process, we discovered that it's amazing work, but also extremely time-consuming. A single game typically takes multiple years to complete and release. From what we have seen, a lot of that comes down to the lack of intelligent automation options. The automation tools that somewhat work have mostly been developed by game developers themselves and solve isolated problems.
We saw that we could seize the opportunity and create AI-powered tools for game developers. When developing a product, a key to success is finding the right focus. You cannot do everything. We decided to focus on motion capture cleanup. This is a simple and understandable problem, yet it’s really hard to automate.
What motion capture is and how game developers use it
Most top-level developers use an optical motion tracking system. Put simply, high-speed cameras track light reflected from coated plastic balls (markers) attached to whatever we want to track. This is the most accurate technology available today and has been used heavily in some of the best AAA games and blockbuster movies. The challenge with this technology is that markers are not visible all the time, and when one is hidden, the system loses its understanding of that point in 3D space. This can lead to various deformations of the movement, and the character's motion can look unnatural.
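To give a feel for what cleanup involves at its most basic level, here is a minimal sketch (not our product's algorithm, just an illustration) of filling a short occlusion gap in one marker's trajectory by linear interpolation between the last and next visible frames. The function name and data layout are hypothetical; real cleanup involves far more sophisticated models and skeletal constraints.

```python
# Hypothetical illustration: linearly interpolating across frames where an
# optical marker was occluded. A None entry means the marker was not seen.

def fill_gaps(trajectory):
    """trajectory: list of (x, y, z) tuples, with None where the marker
    was occluded. Assumes gaps are interior (visible frames on both
    sides). Returns a copy with gaps linearly interpolated."""
    filled = list(trajectory)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            start = i - 1                       # last visible frame
            j = i
            while j < n and filled[j] is None:  # find next visible frame
                j += 1
            a, b = filled[start], filled[j]
            span = j - start
            for k in range(i, j):
                t = (k - start) / span          # fraction of the way across
                filled[k] = tuple(a[d] + t * (b[d] - a[d]) for d in range(3))
            i = j
        else:
            i += 1
    return filled

# Example: a marker hidden for two frames gets plausible in-between positions
track = [(0.0, 0.0, 0.0), None, None, (3.0, 3.0, 0.0)]
print(fill_gaps(track))
```

Straight-line filling like this is exactly what produces the unnatural, "floaty" motion animators then have to fix by hand, which is why the cleanup problem is so much harder than it first looks.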
A team of professional technical animators (sometimes called motion editors) is then deployed to fix these issues, but here's the catch: it actually takes longer to clean up the data than to acquire it. We've had the chance to talk to dozens of motion capture professionals and arrived at an interesting metric.
1 hour of raw captured motion takes roughly 1 day to shoot and 5 days to clean up
That absolutely shocked us, but it also emphasized how big this opportunity is for AI. In fact, we could be automating a very large and very expensive amount of work that creative professionals do not want to do.
A senior VFX supervisor mentioned that daily production costs for a large studio can easily jump over $10,000 a day. But the main problem isn't really the money, it's the fact that so much of it is still wasted on the flaws of state-of-the-art technology. Not to mention that cleanup is a task most animation teams hate. When we talked to professionals, it almost felt like a chore your parents gave you for misbehaving.
Our market research on MoCap
With our experience, we understand that no one can create a successful business or product without market validation. This is an important first step before you get too deep into development. Thanks to our professional services work, we were sure that deep neural networks are capable of replicating some of the work motion professionals do, but that was just a single customer. Now, you can go out and buy market research conducted by people who do the same thing in 40 other industries, but the more you dig in, the more confusing it becomes. Market sizes in the reports we found ranged from $150 million to $26 billion. The best way to do this is to experience the market firsthand as much as possible. Stop sitting behind Google and go out into the field. Our task became clear: talk to as many potential customers as possible and try to understand whether they have similar problems and how they are dealing with them.
Through small prototypes showcased at trade shows like GDC or Game Access Conference, directly contacting professionals online, and even discussing with existing technology suppliers, we saw that professionals were dealing with the problem (on different levels) everywhere. We also received organic conversions on our website kapnetix.ai, where teams expressed their frustrations with the mocap cleanup process. Some developers also introduced us to their friends and colleagues from the movie industry and while the process is different, time spent on cleanup is still significant.
If my team has a weakness, it's creating product names 😅. Kapnetix was created from the phrase "capture with neural nets kinetics". While we were working day and night on getting this spinoff going, my (at the time 7-year-old) daughter volunteered to help, and I gave her the logo design. I loved the result and the colours, and handed it to a pro designer to "polish up". The result is the logo and colours you see on this website 😎.
The team is working on the development of the product, but we are still heavily investing in research. We're playing with everything from generative AI models to depth sensing. Our mission is to create AI tools for VFX professionals so they have time to focus on the creative aspects of their work. They can focus on the experiences we love in movies and games rather than fixing the flaws of today's motion capture systems. We're all in on this journey and we'd love to hear about your experience with motion capture cleanup.
We're also looking for early adopters. If you feel this is something that would benefit your work and you have the space to experiment, do reach out and we can plan exclusive early access for you.