Review

Corridor Crew recreates the Matrix bullet-time effect with AI

  • Updated December 24, 2025
  • Jason Kelly
  • 47 comments

Two decades after the release of *The Matrix*, a team has recreated the iconic bullet-time effect with a fraction of the original resources. While the original sequence required 120 expensive film cameras, a multi-million-dollar budget, and nearly a year of post-production, the Corridor team achieved a similar result with just ten phone cameras and a single 5090 workstation. The key to this modern recreation was the clever use of AI tools for video interpolation, inpainting, and 3D editing. This shows how AI can dramatically lower the technical and financial barriers to creating complex visual effects, making something that was once a monumental artistic feat far more accessible.

47 Comments

  1. This shot specifically avoids rotating a camera around the actors, instead using 96 separate film cameras, because they determined the G-forces on a camera moving that fast would have destroyed it. Film sets can certainly be dangerous places.

  2. These creators are impressive. It’s unfortunate they faced so much harassment and death threats years ago over their AI-animated rock paper scissors short film.

      1. The video was well-received by general audiences, but it also drew intense backlash from critics of AI. Those critics directed significant anger, threats, and harassment toward the creators, attempting to harm them and their channel.

  3. It’s frustrating when people claim “this is what AI should be used for,” as if dismissing the many other valid applications. AI doesn’t need to please everyone, and we shouldn’t water down our advocacy just to appease critics. Instead, those opposed to AI should specify which uses they find objectionable and explain why. The burden of proof lies with them.

    1. I’m not saying this is the only use for AI. My point is that it dramatically reduces the resources and equipment needed for technically demanding shots. It’s surprising to see people criticizing or questioning that. The title was limited to 300 characters, so I may not have communicated it clearly.

    2. This expectation is unreasonable. It’s like saying canvas and paint should only be used for a masterpiece like a Rembrandt, never for the practice pieces that build the skill to create one. The technology is the same; you can’t go from A to C without passing through B.

    3. For every harmful application of AI, there is someone who has used the same technology to benefit humanity. Critics often dismiss these positive uses, claiming they are not equivalent.

    4. The statement “this is what AI should be used for” is accurate here. While AI models were used, I would still classify this as a human-made project. If someone had simply prompted a model and received this output, it would not be.

      I think the “suck it up” attitude is less palatable when discussing technology that could erase many jobs—the very work people must do to survive in our current economy.

    5. Sorting trash is also a key application of AI, and while I love art, that use is far more impactful. AI can be applied to millions of tasks, and its adoption will keep growing in that direction.

    1. Watch the main video from Corridor Crew.

      Consider how you would shoot this yourself. It’s not about green screens—removing the background is the easiest part.

      This is a slow-motion shot, so you need to capture many images quickly, around 100-200 frames in just 0.2-0.3 seconds. Few cameras could shoot that fast back then. Additionally, the camera moves around the subject during the shot, covering 5-10 meters (20-30 feet) in that same brief time. That requires the camera to move at about 110 mph (180 km/h), which is difficult even with modern robots and dangerous around actors.
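      A quick back-of-the-envelope check of those numbers (the distance, duration, and frame count below are rough figures taken from this comment, not Corridor's published specs):

```python
# Rough figures from the comment above, not exact production numbers.
distance_m = 10.0    # camera path length around the subject (upper estimate)
duration_s = 0.2     # real-time window the shot is captured in
frames = 150         # slow-motion frames wanted from that window (assumed midpoint)

speed_ms = distance_m / duration_s    # 50.0 m/s
speed_kmh = speed_ms * 3.6            # 180.0 km/h
speed_mph = speed_kmh / 1.609344      # ~112 mph
fps_needed = frames / duration_s      # 750 frames per second

print(f"{speed_kmh:.0f} km/h, {speed_mph:.0f} mph, {fps_needed:.0f} fps")
```

      That 750 fps capture rate is far beyond what ordinary film cameras of the era could sustain, which is why a single moving camera was never an option.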

      Instead, they used multiple static cameras triggered in sequence. The challenge is that everything between the cameras is missing and must be filled using the images from adjacent cameras. Larger gaps between cameras require more frames to fill, which is why they originally used 120 cameras, though gaps still remained.

      Initially, they simply blended the two images together, but the results were poor. The breakthrough came with optical flow, which improved motion prediction but still didn’t understand limbs or body parts.
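      The ghosting problem with naive blending can be shown with a toy one-pixel-row example (illustrative only, not their actual pipeline):

```python
# Two "camera" frames as 1-D pixel rows: a bright pulse moves from x=2 to x=6.
frame_a = [0] * 10
frame_a[2] = 255           # pulse as seen by camera A
frame_b = [0] * 10
frame_b[6] = 255           # pulse as seen by camera B

# Naive in-between frame: average the two images pixel by pixel.
blend = [(a + b) // 2 for a, b in zip(frame_a, frame_b)]
# Result: two half-bright ghosts at x=2 AND x=6, instead of one pulse midway.

# What optical-flow interpolation aims for instead: estimate the motion
# vector (+4 px here) and warp the pulse to the midpoint at full brightness.
flow_frame = [0] * 10
flow_frame[4] = 255        # single pulse at x=4, no ghosting
```

      Real optical flow still breaks down where its motion model does not hold, such as occlusions and articulated limbs, which is the limitation described above.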

      Around 2020, GAN-based machine learning interpolation allowed generating up to 16 in-between frames directly from two images.

      Today, AI video can create several seconds of footage—hundreds of frames—from minimal input. This dramatically reduces the gaps in footage and cuts the number of cameras needed down to just a few, like the 10 phones they used.
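      The camera-count math above can be sketched as follows. The 120-frame total and the per-gap figures are illustrative assumptions, chosen to line up with the numbers mentioned in this thread:

```python
import math

def cameras_needed(total_frames: int, inbetweens: int) -> int:
    """Static cameras required to end up with `total_frames` frames when
    `inbetweens` frames can be synthesized between each adjacent pair of
    real cameras."""
    gaps = math.ceil((total_frames - 1) / (inbetweens + 1))
    return gaps + 1

cameras_needed(120, 0)    # no interpolation: one camera per frame -> 120
cameras_needed(120, 16)   # GAN-era, 16 in-betweens per gap -> 8 cameras
cameras_needed(120, 13)   # 13 in-betweens per gap -> 10 cameras
```

      The 13-in-betweens case is my assumption picked to match the ten phones mentioned, not Corridor's stated workflow; the point is only that more synthesized frames per gap means fewer physical cameras.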

  4. This is an excellent case study for the potential of diffusion models. I’ve been saying for a while that this technology will streamline and democratize many traditional film techniques. It’s great to see it done in practice.

  5. I wonder if they are using a public ComfyUI workflow. They didn’t mention their use of AI much, likely because they have received death threats for it in the past.

    1. They included the full workflow in the description, which is great. It’s a shame they avoided using the term “AI” as much as possible, but that’s likely a necessary choice for revenue, even though the techniques are fundamentally the same. It’s an unfortunate reality of catering to public perception.

Leave a reply