{"id":38412,"date":"2025-12-20T16:33:41","date_gmt":"2025-12-20T16:33:41","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=38412"},"modified":"2025-12-20T16:33:41","modified_gmt":"2025-12-20T16:33:41","slug":"disney-and-openai-signal-the-arrival-of-ai-video-streaming","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=38412","title":{"rendered":"Disney and OpenAI Signal the Arrival of AI Video Streaming"},"content":{"rendered":"<p class=\"\" data-block=\"sciam\/paragraph\">Recently I looked up the earliest surviving motion picture, Roundhay Garden Scene, which dates back to 1888. Four figures, two men and two women, walk around a yard with quick, jerky steps. It lasts about two seconds.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">I also recently watched some clips made in 2016 by researchers at the Massachusetts Institute of Technology and the University of Maryland that are among the first fully artificial-intelligence-generated videos. Each is about a second long. In one, a blurry figure stands on a golf green, bent at the waist to putt. No one would confuse these videos or Roundhay Garden Scene with the slick realism of contemporary cinema. And just as skeptics today often deride AI video as wasteful, 19th-century critics dismissed early cinema as a \u201cfoolish curiosity.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Yet a recent agreement between Disney and OpenAI offers a glimpse of a different future. Starting in early 2026, the tech company\u2019s video generator Sora will be able to create videos featuring more than 200 characters from Disney, Marvel, Pixar and the Star Wars franchise. And Disney+ will stream a selection of user-made clips.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Disney will also invest $1 billion in OpenAI and use its tools to build \u201cnew experiences for Disney+ subscribers,\u201d according to a joint Disney and OpenAI press release. In announcing the partnership, Disney CEO Robert Iger said that the company would \u201cthoughtfully and responsibly extend the reach of our storytelling through generative AI.\u201d He also said in a recent earnings conference call that he intends for subscribers to create content within Disney+ itself. If you want to watch Elsa and Cinderella take down Maleficent, you\u2019ll be able to ask for the scene\u2014though it may last only 20 seconds.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If this is the start of AI TV on demand, I wonder how long it will be until these clips reach 20 minutes or an hour, given the computing costs and environmental burden. Plenty of people believe it\u2019s impossible, but I imagine that few of those who watched Roundhay Garden Scene foresaw The Great Train Robbery, a 12-minute milestone of silent cinema from 1903, much less Gone with the Wind\u2014or streaming.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The challenge lies in how today\u2019s image-generation systems work. They are built on diffusion, a technique that begins with visual \u201cnoise\u201d that is gradually refined into an image. Picture a person standing in mist: in repeated passes, the AI essentially removes the mist and fills in new pixels until a coherent figure appears. Each refinement pass adds to the cost.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Video is even more challenging. The series of images must be coordinated so that facial features don\u2019t change and coffee mugs don\u2019t vanish. 
In one second of high-definition video, millions of pixels are changing. During a keynote speech at a hackathon hosted by AI community hub AGI House, Bill Peebles, an OpenAI researcher who helped develop Sora, said, \u201cWe discovered how painful it is to work with video data. It\u2019s a lot of pixels in these videos.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">To manage the pixels, OpenAI\u2019s system compresses video into a simplified version that keeps the crucial information. The system then treats that compressed video like a loaf of bread, slicing it into frames and dividing each frame into cubes. The model can then coordinate all the cubes with one another, much as the models that power ChatGPT relate all the words in a response.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The leap from seconds to minutes is so punishing because the more frames you add, the more information the model has to keep in view. As videos get longer, inconsistencies accumulate. True \u201con-demand\u201d AI TV would also require cuts between scenes. If every Disney+ user were requesting such scenes with near-term technology, the costs would be staggering.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Researchers have been hunting for more efficient approaches. One is for the model to break the job into stages. \u201cInstead of denoising or generating the whole video all at once, you generate frame by frame,\u201d says Tianwei Yin, a research scientist at AI image-editing start-up Reve, who co-developed the CausVid video-generation software. \u201cAt each step, your compute is limited to a much smaller portion instead of the full thing, and this enables you to go much longer.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Yin believes that systems will be able to generate five-minute videos efficiently by next year and that, by integrating different existing AI technologies, they could reach an hour not long after. Others have echoed this optimism. 
In a recent BBC interview, Google CEO Sundar Pichai described the possibility of high school students making feature-length AI films in coming years. Crist\u00f3bal Valenzuela, CEO of the AI-video-generation company Runway, told El Pa\u00eds earlier this month, \u201cHaving 60 or 90 minutes with consistent characters and story still isn\u2019t possible. But it will be soon.\u201d He went on to say that watching AI videos as they are generated in real time is also on the horizon.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The road from curated fan clips to feature-length films will pass through some unglamorous innovations, not to mention negotiations over how to pay the creatives whose work feeds it. And though the financial burden of AI videos seems prohibitive, millions of people globally are involved in producing and training AI models, and the costs of technologies usually decrease. For instance, bandwidth was prohibitively expensive in 1998\u2014it cost about $1,200 per megabit per second (Mbps) monthly for large networks\u2014but by 2025 the lowest reported cost was $0.05 per Mbps monthly, a 99.996 percent decrease. This change made streaming on Disney+ or Netflix possible.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The cultural path of new mediums is far harder to imagine, and resistance is often intense. Poet Charles Baudelaire railed against photography in 1859 for its lazy realism that dragged art away from the imagination. In past centuries, \u201csceptics and partisans both compared photography to painting, and moving pictures to theatre,\u201d wrote present-day scholar Reuben de Lautour. We appear to be in an even more complicated moment. 
What seems certain is that, as in the past, technology will rapidly evolve, allowing millions of creators to test possibilities we can\u2019t yet predict.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recently I looked up the earliest surviving motion picture, Roundhay Garden Scene, which dates back to 1888. Four figures, two men and two women, walk around a yard with quick, jerky steps. It lasts about two seconds. I also recently watched some clips made in 2016 by researchers at the Massachusetts Institute of Technology and<\/p>\n","protected":false},"author":1,"featured_media":38413,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[58],"tags":[12580,3707,1430,9418,2781,94],"class_list":{"0":"post-38412","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-science","8":"tag-arrival","9":"tag-disney","10":"tag-openai","11":"tag-signal","12":"tag-streaming","13":"tag-video"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/38412","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=38412"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/38412\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/38413"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=38412"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=38412"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=38412"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}