Welcome, everybody who listens and tunes into the show. ThursdAI is your latest and greatest news about what happened in AI last week. And we're here to talk about multiple things today. There's a bunch of open source. There is an OpenAI live stream happening at 10 a.m. Pacific, which is in an hour and a half. We are obviously going to restream it. I know what it's about, and I'm going to tell you, I got a little bit of a sneak peek into it, and it's so dope. Besides this, though, we have other breaking news here, soon to come as well. I'm not going to tell you what those are about. You have to stick around with us.

I just want to mention, for folks who are joining us for the first time, that last week we celebrated two years of the live stream. That was awesome. I really enjoyed it. I liked the outpouring of love and the congratulations that people sent me. It was awesome to see, and it's just getting started; the speed with which AI models are releasing is ever increasing. We have a bunch of guests as well. So I'm very, very excited to tell you about all these things.

And I think it's time for a quick TL;DR of everything we're going to talk about, because that's how we start the show. So let me share this with folks who are watching; folks who are not watching, you can just tune in. Basically, we start the show with a TL;DR. I had some feedback from folks who were like, "Hey, don't tell us about what you're going to talk about. Just talk about it." And I disagree. I disagree because we are here for two hours. Not many people have two hours to sit on a Thursday. Many people just want to know what's going on, and then, if their interest is piqued, they can stay with us. So I disagree, but I appreciate the feedback.

So let's do the open source TL;DR section. In open source we have a great release from Mistral. Folks, Mistral is back with open source. Let's go. Applause. Yeah, let's go, Mistral. We're very much welcoming Mistral's models back to the stage with an Apache 2.0 license: their multimodal Mistral Small 3.1, 24 billion parameters, which fits on a single GPU. Wolfram, you played with this, I'm assuming; we're going to chat about Mistral coming back to the land of open source.

Ah, a very interesting release from the fridge maker LG, which makes great fridges. By the way, my monitor here is also an LG monitor. They have an AI side, apparently; I didn't know about this, and they released a new LLM called EXAONE. It's a thinker also: EXAONE Deep, a 32-billion-parameter thinking model. I actually had a chance to test it. EXAONE was released from LG, of all places, so we have a new entry to the stage of LLMs. It's very interesting.

ByteDance released something called DAPO, D-A-P-O, which apparently is better than GRPO, the RL, reinforcement learning, training method that DeepSeek released. That's going to be very interesting to look through.

And then, folks, finally, finally, we got some open source from NVIDIA, for the second time, because GTC happened this week. NVIDIA finally drops Llama Nemotron: they dropped Nemotron Super and Nemotron Nano. We've been talking about these models since the last time Jensen was on stage, and now Jensen was on stage again, and they actually dropped the models. The models look pretty good, so we're going to chat about them as well.
I invited NVIDIA friends, but a formal request to the NVIDIA PR department: we love NVIDIA! We absolutely love NVIDIA, all of us. Please let your folks come to the show. Okay, that's all I'll say. But I did request some folks to come to the show, and hopefully the PR department will understand that promoting their stuff on ThursdAI is a good idea.

Among the big companies, it seemed quiet this week. It looks like nobody wanted to step on the toes of Jensen's performance. But Google released a few interesting things in the Gemini app. They released Deep Research for free; now everybody can access Deep Research, which is great. They also released Canvas, which is super cool: you can now build apps within Gemini and actually see them. So we're going to chat about this, and they have a bunch of other stuff to mention. Not to mention that last week we told you about the native image output of Gemini, and it's been blowing up all over my timeline. It's incredible if you haven't tested it out.

OpenAI, just yesterday, decided to say, "Hey, remember that Pro-tier level of a model that we have, with extra, extra compute? We're going to make it available in the API," for oligarchs only, because it costs a ridiculous amount of money. OpenAI makes o1 Pro available in the API, and it's not a different model; it's something else we'll try to understand exactly, but it's o1 with probably a lot more compute and resources. It costs $600 per 1 million output tokens. If you vibe code with this, you'd better already have VCs backing your startup, because it's ridiculously, ridiculously expensive; there's a quick back-of-the-envelope cost sketch right after this segment. But hey, maybe some people want it; definitely some people asked for it. It's way easier to use Repo Prompt and just go with OpenAI Pro for $200 a month.

We also have a quick NVIDIA GTC recap that I hope we'll get to. Moving on to vision and video: StepFun, a company that already gave us a great model for video, released their image-to-video model, upgraded from their text-to-video model. It looks pretty good. They claim state of the art, and it does look pretty good.

And now the breaking news announcement: AI breaking news, coming at you only on ThursdAI. The breaking news we got, literally just before starting the show, is that OpenAI's live stream is going to go live at 10 a.m. The only thing we can talk about publicly right now is that they posted a tweet, and in that tweet there's a video, and in that video there's a voice that says, "Live stream at 10 a.m." Very empathetic. I don't know. So there's going to be a live stream at 10 a.m. We're going to restream it, folks, so you don't have to miss anything. You can be on ThursdAI and also enjoy OpenAI's releases. Though I will say, they tend to go long. Sometimes they have one thing to announce and then they're like, "Oh, let's talk about this for 15 minutes." We don't have that much time on the show to give them 15 minutes. We'll see how that goes. But I will tell you, given that I had a little bit of a preview of what's going on in this live stream, I already have a guest queued up who's going to come and talk to us about the implications of what they're going to release. So that's a little sneak peek for you. I'm very excited to be able to have this, because the longer we're on the show, the more we get to see a little bit ahead of time, and I love this.
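To put that o1 Pro price in perspective, here's a quick back-of-the-envelope sketch in Python. The $600-per-million output tokens figure is the one mentioned above; the input price and the token counts for a typical vibe-coding session are made-up assumptions, purely for illustration.

```python
# Rough cost sketch for calling o1 Pro through the API.
# The output price ($600 / 1M tokens) is the figure mentioned on the show;
# the input price and the token counts below are hypothetical, for illustration only.

OUTPUT_PRICE_PER_TOKEN = 600 / 1_000_000   # $600 per 1M output tokens
INPUT_PRICE_PER_TOKEN = 150 / 1_000_000    # assumption: input priced lower than output

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API request."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

# One "vibe coding" prompt: a big chunk of repo context in, a long answer out.
print(f"${request_cost(input_tokens=50_000, output_tokens=20_000):.2f}")  # ~$19.50
# A day of 30 such requests:
print(f"${30 * request_cost(50_000, 20_000):.2f}")                        # ~$585.00
```

Even with these assumed numbers, a single day of heavy use lands in the same ballpark as several months of the $200 Pro subscription, which is the point being made above.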
And also, for everybody who's listening: we respect embargoes strictly. So if you were to give us some news, we will only talk about it once it's public. Speaking of public, in the vision and video category we will have another breaking news item with another guest, a very, very interesting guest, this week.

Also in voice and audio, and I think, Wolfram, you had a chance to test this out: a company called Canopy Labs, which I've never talked about on the show before and had never heard of before, released Orpheus 3B. It's a natural-sounding speech language model. It's not just a TTS; it's an actual model, built, I think, on Llama 3.1 or 3.3, and they trained a TTS into that thing. So you can actually ask it for directions; you can ask it for a bunch of stuff. It's 3 billion parameters, and they're going to release tinier versions soon as well. It has zero-shot voice cloning, which I tried to run, and CUDA ran out of memory for me; there's a quick loading sketch after this section for anyone who hits the same thing. But this is so cool. This is like a state-of-the-art TTS language model, basically, that you can prompt to talk to you and do multiple things. And just before the show, NVIDIA open sourced Canary, a 1-billion-parameter and a 180-million-parameter speech recognition and translation model, so basically a Whisper competitor, which takes something like the number two spot on the speech arena, based on the vibes of a friend from Hugging Face. So a lot of news in voice and audio, a lot of news, including some upcoming news very soon.

I want to talk about this next segment. We don't usually cover AI art and diffusion in 3D, but folks, this week we're going to cover it. Tencent released Hunyuan 3D. Do you guys remember Tencent? Of course you remember Tencent. One of the best-performing video models right now is Hunyuan Video, and they have been on a tear of releases lately. This is also, I believe, open source, and Hunyuan 3D version 2 is an absolute state-of-the-art 3D model right now. Not only that, they released a 3D multi-view version, which, to keep the details out of the TL;DR, is basically a state-of-the-art 3D model coming at us from Hunyuan 3D. NVIDIA also released something called Cosmos Transfer: they dropped conditional world generation with adaptive multimodal control. It's a mouthful, and we'll see if it's going to be interesting to talk about.

And I think we're coming up on the end of our TL;DR. In the tools section, we have our friends from Arcee, A-R-C-E-E. Our friends from Arcee did a bunch of fine-tunes, did MergeKit, did a bunch of great stuff, our friends from over there, and we'll have Lucas on, hopefully, to talk about their Conductor. Conductor is a new release from Arcee that is a model router, routing your requests to different places, so that's going to be awesome. And then also in tools, Cursor shipped something called Claude 3.7 Max, which many people were concerned was a new model, but it's basically just unlimited Claude 3.7 with a bunch of reasoning tokens. NotebookLM is about to release a new feature called Mind Maps, which I find super, super cool: you just upload a bunch of stuff and it creates mind maps for you; you'll be able to create your own now. And one of the funnest things I've seen this week is Gemini co-drawing, which is basically using Gemini's native image output features so you can just draw with it together, and we're going to play around with this. It's super cool.
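On the CUDA out-of-memory point with Orpheus: a 3-billion-parameter Llama-style checkpoint wants roughly 12 GB just for weights in full fp32, so loading it in half precision (or letting layers spill to CPU) is usually the first fix. Here's a minimal sketch using Hugging Face transformers; the repo id and the speaker-tagged prompt format are assumptions for illustration, and the actual audio decoding step uses the project's own decoder, which isn't shown.

```python
# Minimal sketch: loading a 3B Llama-based speech model without running out of VRAM.
# The repo id below is an assumption; check Canopy Labs' Hugging Face page for the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "canopylabs/orpheus-3b"  # hypothetical id, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~6 GB of weights instead of ~12 GB in fp32
    device_map="auto",          # spill layers to CPU if the GPU is still too small
)

prompt = "tara: Give me directions to the nearest coffee shop."  # assumed speaker-tag format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
audio_tokens = model.generate(**inputs, max_new_tokens=1024)
# Orpheus generates audio codec tokens; turning them into a waveform requires the
# project's own decoder (not shown here), not tokenizer.decode().
```

The same trick (fp16 or 4-bit loading plus device_map) is the usual workaround when a model that "should" fit on your GPU throws a CUDA out-of-memory error.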
I think that's it for the TL;DR, folks. It's been a busy week for AI. Busy week. There's a bunch of stuff, not to mention that nobody wanted to go live because of GTC; nobody wanted to upstage Jensen. I will just ask super quick: folks, how are you feeling about this week in AI? Shall we get started, or am I missing anything, Wolfram? And also, folks who are listening to us, please feel free to leave me a comment about what we just covered. We have some comments already, but if you have anything that we haven't mentioned in the TL;DR, please, please add a comment. We'll absolutely get to it as well.