Seedance 2.0: A New Stage in AI Video Generation
A video on the limited release of Seedance 2.0, an AI video generation model available in select countries via CapCut and Dreamina, with new restrictions and C2PA watermarks. It also shows how Luma Labs' Uni1 model and its agents combine into a powerful storyboarding and production workflow. Despite the limitations, the presenter praises the model's quality and points to a likely expansion toward a global release.
Key takeaways
- The Seedance 2.0 release is limited to these countries: Indonesia, the Philippines, Thailand, Vietnam, Malaysia, Brazil, and Mexico
- New guardrails added: blocking of videos containing real faces and unauthorized IP, plus C2PA watermarks
- Luma Labs' Uni1 model and its agents improve storyboarding and video generation workflows
🟢 Opportunities — marketing agencies can now prototype video ads quickly with Seedance 2.0 and Uni1, cutting the time from concept to finished material from weeks to days. 🔴 Threats — dependence on a single vendor and possible changes in licensing or geoblocking could derail campaigns, so it is worth diversifying tools and watching for policy updates.
What everyone missed — although the model is positioned as "the best," it still requires significant post-processing because of occasional artifacts and unexpected edits (for example, insisting on adding a nose ring), which reduces its plug-and-play appeal for production pipelines. This underscores that even the market leader still needs human intervention to reach a professional result.
Video description
Well, this is turning into a busy week. Seedance 2.0 has released... kind of. There's a lot to unpack in the details and we're going to go over all of that, but we are headed in a good direction. Also, Luma Labs has dropped a new thinking image model. This one looks really cool. But when you pair it with another recent Luma update, that's where things get pretty powerful. And then when you pair that with Seedance 2.0, well, you're in for a wild ride.

Kicking off, ByteDance have released Seedance 2.0 globally, although they do have a very narrow definition of what "globally" means. Look, the whole Seedance saga has been kind of a dramatic roller coaster. Originally it was set for release in late February, which got pushed to March, which then got pushed to mid-March, and then lawsuits and C&Ds got involved, and, well, now we're here. Along the way there were all kinds of rumors, ranging from heavy nerfing of the model to outright cancellation. We've had some janky and questionable workarounds via API calls, and look, I'm going to be honest, that part's not going anywhere. So, backstory lore complete: Seedance 2.0 is now available on both CapCut and Dreamina, although, as of this video, it has only been released to a select number of countries. I know, I know. We'll talk about how all this plays out in just a minute. But for right now, it is available on both CapCut and Dreamina to users in Indonesia, the Philippines, Thailand, Vietnam, Malaysia, Brazil, and Mexico, and, as they note, "with expansion over time."

Now, why those particular countries? I presume it's threading the needle over the IP issue; these are countries where it can be released while they shore up more protections. In the meantime, they actually have released some new guardrails, including restricting certain capabilities for videos containing real faces. We're going to talk about that in a minute, because I don't think it's as bad as everyone thinks it is. They've also put in some technology to block unauthorized generation of intellectual property, so no more Pokémon for you. And they have also implemented C2PA content credentials. This is the invisible watermark that is generated alongside videos to indicate that they are AI generated; both Veo and Firefly use it, and a number of other video generators do as well (there's a small verification sketch after the transcript). I don't have a problem with that at all.

So, all that said, it will be rolling out to more countries; they have said "with expansion over time." I'm hearing weeks, but I've heard a lot of things over the last few months. Now, can you get around that region lock with a VPN? I don't know. Maybe. But it also might just be worth saving yourself a headache and waiting a little bit to see when API or wider access rolls out. Despite the rumors that Seedance 2.0 had been cancelled, that was never true. We will eventually see a Western release of it; they didn't spend millions of dollars on this model to throw it in the trash bin. And I don't think we're going to be seeing an Eastern versus a Western model. All that said, I guess the official release finally means my gag order is off and I can talk about it. I do want to note that this isn't a full review; it's more of a sampling and overall thoughts. Actually, tomorrow I'll be releasing a short film made with Seedance 2.0, so do make sure you like and subscribe for that. I haven't said that in a while.
And that will be followed by one of the masterclass breakdowns I commonly do, where I'll go over the model and the workflow in much more detail. So, kicking off on the Dreamina side with text to video, and a fairly detailed text prompt here. I'll go over prompting in that masterclass later this week. In the meantime, let's take a look at this western scene. >> The man I'm looking for, you'd remember him. He has six fingers on his right hand. >> Six fingers? No. Can't say I've seen anyone like that. Mister, the man you're looking for, he's the sheriff. A little twist there, and a subtle Princess Bride reference.

Moving on and checking in with our guy Brad, who's been going through some stuff and is still standing in the rain. Once again, text to video. >> Julia. Julia, please just give me one more chance. >> Brad, you need to give it up. It's been 10 years. I'm married. I have two kids. >> So, there's still a chance. >> We really need to take Brad out for some chicken wings soon. He's been kind of a mess lately.

Now, one thing I've definitely noticed on the text to video side: clearly we are getting realistic generations of characters, but I have been noticing prompt pushback on certain things. Especially on the text to video side, you'll get things like "this video may contain inappropriate content" despite it being the same prompt. My guess is that the model is now double-checking itself and making sure it isn't generating a famous face. That means this isn't so much of an issue when you're using image to video. For example, this image of our FBI agent drinking coffee at a Pacific Northwest diner. >> This is a damn fine cup of coffee. But how's the pie? >> It's really good. It's an old recipe from the Log Lady, who got it from the owls. >> We have run that prompt a lot, and I've got to say, that is by far the best version of it I've seen yet.

Now, images with famous faces, not quite so much. Longtime channel viewers will know my obsession with this project where I recast Bruce Lee in the original Terminator movie, and in the previously unhinged version of Seedance 2.0 I was able to get this generation off. >> I'm here to see Sarah Connor. >> You can't see her right now. She's making a statement. >> I'll be back. >> Man, I wish I had time to get more than one scene off on that. But as we can see now, that's not happening anymore. And look, while that might be a bummer to some, we also have an unlimited universe of characters we can create. It doesn't have to be Leonardo DiCaprio.

Now, is the model perfect? No. But I would say it is pretty good. Generally, you'll probably find what you're looking for within one or two generations. I very rarely end up with morphing or decoherence issues, although occasionally it'll do wacky stuff like this; it will occasionally take some liberty in the editing. In this case, I ended up utilizing a storyboard and just the model image reference of Flamethrower Girl, throwing them together in the Omni model, and we ended up with this, which is really good considering what I gave it to work with. I will say there are some unmotivated jump cuts in there, and the other thing is that it seems to really want to put a nose ring on Flamethrower Girl pretty often.
So, that's something I often have to negative prompt for. The Omni model will also occasionally change the facial characteristics of your reference character. That said, I did find a way around this; we'll talk about that in the masterclass video. But again, in terms of morphing and decoherence, I really do not run into it that much. This is a shot, and a pretty boring shot in all honesty, that I was using for another project. It's just a ship, and they're pulling up a buoy. But there's nothing wrong with this shot, and that's the kind of stunning part about it. And when you do run across something quote-unquote wrong, you can usually mine it and take little snippets to use in the edit.

Over on the CapCut side, at least on the web app, they've been taking a different approach. This is much more canvas-based, so no nodes or spaghetti noodles. So again, taking one of our storyboard frames here, the Omni model is available here. All you have to do is highlight both of them, you'll see they auto-populate down here, you write your prompt, and that is essentially the Omni model. From there you'd switch it over to video, make sure Seedance 2.0 is selected, and set whatever your aspect ratio and timing are. I will say that I am having a lot of fun with this storyboard approach. It does go a bit off the rails in terms of odd cuts and whatnot, but for the most part everything stays pretty coherent, so it's just a matter of taking that output and cutting around the stuff you don't need. Again, I'm being pretty lazy with prompting here; we'll see a lot more intentionality in the bigger project. But this canvas approach kind of leads us into our next section, which is still Seedance related.

So, Luma have released their new image model, Uni1. This is an autoregressive thinking image model, kind of similar to how Seedance works; we talked about that a few videos ago. Or you can really think of it as how Nano Banana works. It's a model that can lean on internal and external knowledge to think about its composition and double-check its work. Now, there is obviously a lot you can do with this, and actually I have not had time to run it through its paces because, well, I've been heads down in this other project. But where I think Uni1 gets really crazy powerful is when you utilize it along with the fairly recently introduced boards and agents feature that Luma dropped as well. As you can see here, just zooming in and playing around with Uni1, it does pass the wine-glass-filled-to-the-top and 2:30-on-the-clock tests. So yeah, it's a thinking model. But where things get pretty interesting to me is just taking our base Flamethrower Girl image: I ended up having a chat with it and saying, let's create some storyboards. The agent actually went off on its own and built out a sketch-version rotation of our character, and even ended up generating a small storyline for us. What's interesting here is that anytime you're prompting the agent, it's aware of this document.
So, it'll kind of go back and check its own work. From there, it started generating some storyboard ideas for us. And then, as I prompted, I got a little bored with what was there, so I asked for new characters, which it was happy to generate, and then new storyboard sequences based off of those characters. From there, I just pulled out my own selects and ran them through Seedance to get some more still frames, and then once again into Luma agents for some Uni1 versions of essentially our same storyline. What's kind of neat about this is that we can continue to chat with the agent and get more Uni1 angles on this, or actually Nano Banana is available here as well, as we'll see in just a little bit. There's just a lot of potential here. Then from here you can generate video. Well, they don't have Seedance 2.0 here yet; I presume that will happen once the API goes wide, but you can generate in Kling, which is still a very, very good video generation model. So that is one version of this workflow. Now, that's kind of just the baseline for this thing, because, as we can see here, I actually utilized it again on the thing we'll be seeing in a little bit. So yeah, me and this agent, we got to become pretty good friends. This is pretty powerful. And listen, I don't want to oversell anything, but when you take a thinking image model, combine it with a thinking canvas, and then run that through a thinking video generator, it's pretty impressive.

So, at the end of the day, is Seedance 2.0 the greatest video generator known to man? Currently, yes. In fact, they just released it onto the Artificial Analysis leaderboard, where it clearly took number one, further indicating that yes, it will get a giant global release. Is it the only video generator in the world? No, it is not. Kling 3.0 is still really good, the boys at Google are cooking up Veo 4 (really excited about that), and we're on the eve of WAN 2.7. So, although Seedance 2.0 is really good, it still requires getting in there and editing, because, frankly, it's going to make some weird choices from time to time. I'll keep you informed when we get wider release news on Seedance 2.0, or if there are legitimate ways to access it outside of janky APIs. I always caution against those, just because you never really know what you're getting into and that plug can be pulled at any moment. In the meantime, I'll be back tomorrow with the short film I've been working on. I'm really pretty proud of this one, so I hope you guys get a chance to check it out. As always, I thank you for watching. My name is Tim.
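A quick note on those C2PA content credentials: if you want to check whether a clip you've downloaded actually carries the invisible provenance manifest, a minimal sketch along these lines should do it. It assumes the open-source c2patool CLI is installed and that its basic `c2patool <file>` invocation prints the manifest store as JSON; the sample file name and the report fields read out below (manifests, claim_generator) are assumptions for illustration, not something shown in the video.

```python
import json
import subprocess
import sys


def read_c2pa_manifest(path: str):
    """Dump a media file's C2PA manifest store via the c2patool CLI.

    Assumes `c2patool <file>` prints the manifest store as JSON to stdout
    and exits non-zero when the file carries no Content Credentials.
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None


if __name__ == "__main__":
    clip = sys.argv[1] if len(sys.argv) > 1 else "clip.mp4"  # placeholder file name
    store = read_c2pa_manifest(clip)
    if store is None:
        print(f"{clip}: no C2PA content credentials found")
    else:
        # Field names here assume the usual c2patool report layout.
        for label, manifest in store.get("manifests", {}).items():
            print(f"{clip}: {label} -> {manifest.get('claim_generator', 'unknown generator')}")
```

If the report names a claim generator from Dreamina/CapCut, Veo, or Firefly, the credentials survived the export; screen recordings and re-encodes will typically strip them.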




