
Digital Nexus
Dive into the thrilling world of data, digital, and AI with your superhero hosts, Chris and Mark. This dynamic duo of consultants has built digital wonders in Australia and beyond. They wield their innovation powers at Digital Village and NotCentralised, respectively, bringing you the news, views, and opinions that are simply out of this world.
Mark Monfort, the tech wizard behind the @AusDefi Association and NotCentralised, isn't just a name—he's a legend. With blockchain fin-tech victories under his belt, he's now on a quest to build the ultimate #LLM, SIKE.ai, enhancing business workflows and securing data like a true digital sorcerer. Nothing can stop him!
Chris Sinclair, the design guru and UX/CX mastermind, knows the secrets of digital innovation and business strategy like the back of his hand. Partnered with Digital Village, a league of specialists leading the charge in product development and innovation, Chris is here to prove that the old ways of working are no match for the future!
Get ready for epic discussions, expert perspectives, and a sneak peek into the future of digital innovation. Don't forget to like, subscribe, and stay tuned for more episodes as we explore the frontiers of technology with a dash of humour and a whole lot of superhero flair...or fails!
Digital Nexus
Ep 30 | Google's AI Shockwave, Claude 4 & Agentic AI Explained!
📍 New studio. Bigger ideas. Massive week in AI.
Mark & Chris celebrate the “Dirty 30” of Digital Nexus in style—recording from a new space powered by FEX Global—and dive headfirst into the wildest week of AI news we’ve seen yet.
🔥 In this episode:
- Google’s AI Overload from I/O: Gemini’s supercharged updates, life-like video generation with Veo, try-on AI in Google Shopping, new coding tools & search redefined
- OpenAI's big moves: Codex evolution, a creative collab with design legend Jony Ive, and a shift away from wearables
- Claude 4 is here – smarter, more aware, and refusing to play dumb
- Agents vs. Agentic AI – What’s the difference, and why does it matter for real-world workflows?
- AI in therapy? Exploring the ethics of chatbots providing emotional support in Asia
- Startups to watch: LM Arena grabs $100M, Meta’s open-source push, and Manus AI’s image-gen agent revolution
💡 Quote of the week:
"Change is hard, but this is exciting change."
🔗 Whether you're a founder, technologist, or just trying to keep up, this episode breaks down what’s now, what’s next, and what actually matters.
🎧 Listen now and join the AI ride. It’s only getting faster.
Other Links
🎙️ Our podcast links here: https://digitalnexuspodcast.com/
👤Chris on LinkedIn - https://www.linkedin.com/in/pcsinclair/
👤Mark on LinkedIn - https://www.linkedin.com/in/markmonfort/
👤 Mark on Twitter - https://twitter.com/captdefi
SHOWNOTE LINKS
🔗 SIKE - https://sike.ai/
🌐 Digital Village - https://digitalvillage.network/
🌐 NotCentralised - https://www.notcentralised.com/
YouTube Channel: https://www.youtube.com/@DigitalNexusPodcast
X (twitter): @DigitalNexus
Untitled - May 25, 2025
00:00:00 Chris: Welcome back for another episode of the Digital Nexus Podcast. We are episode thirty. We're old. We are old in AI years. Every time you've done thirty of something, that's great. Yeah, that's it. But as a birthday present to ourselves, we have a new location, a new setup. We've got some great tech, and what a week of news. Yeah, it's the perfect time for it, because we're here at FEX Global, where we're doing a blockchain podcast with those guys as well. But then we're here for the Digital Nexus one. Um, all the professional equipment and stuff, and just with all the news updates, especially Google, Google, Google. And also, I think... nothing about Google. I think the first twenty big things are like Google, Google, Google. But yeah, a little bit of OpenAI, a little bit of Claude and Anthropic. So check that out inside this episode. Yes. Hear all about the knee-jerk reactions of the rest of the industry out there trying to compete with Google. Let's go. Cheers. Hey, folks, we're back. Episode thirty. This is, uh... Big 3-0. Big 3-0, uh, the daunting one. When you're twenty-nine and about to turn thirty, it's a big, scary kind of moment, but when you turn thirty, you're like, oh, it's just a number. You know, you're not that old, but we're old now. We are. Yeah, I'm very old. But you know, we're getting old. We're thirty, and new location. Happy birthday to us. We're in a new area. Happy thirty. Dirty thirty. Actually, it's a clean thirty, because we're in a new venue. Thanks to, um, FEX Global for providing us with this space, this massive TV that you will see in some of the b-roll and some of the other shots. But yeah, just amazing, right? We've got new tech, we've got lights. It looks a bit more profesh. We're not just in a room recording a podcast. With a beautiful screen like this, I need to do my makeup or something.
But, you know, it does not matter, because AI is going to take care of that kind of cool stuff, because Google is just doing everything. Like, I'm waiting for Google to help you get out of bed, or Google Teleport. All these cool things that Google is just doing. We're going to dive into that in this episode, but, Chris, what have you been up to over the last couple of days? Well, yeah, well, I mean, days? Let's go weeks. I mean, uh, I've been progressing a lot with the business that we're building out for the AI tool, very much heavy into the MVP stage, trying to validate a lot of things and, um, test rolling it out with different services. Pretty pumped about that. I'm being vague intentionally. I don't like it. I don't want to talk it up. It could become nothing. It could not work. But, you know, we're pretty excited about moving in a forward direction. It's like very consult speak. That's it. Very consult speak. Moving forward. I don't like it. Um, the masterclass, doing a lot of work in that space, prepping things. We've got a lot of registrations over the last couple of weeks. Check the links below. Check the links below. We'll add that in. And then on top of that, just working, doing a lot of gigs with some customers, been doing a lot of research utilizing AI tools. Um, and I've talked... Deep research kind of stuff? Deep research stuff, um, using, uh, AI tools to conduct the research. I've talked a lot about HeyJuna before. Yeah. And they've made a lot of significant updates over the last couple of months. And we've been able to trial that with a new customer of ours, um, The Smith Family. So, a not-for-profit, which is great. Good cause. And that's been really exciting, because they're a business that hasn't touched a lot of AI, um, as well. And so showing off these new things that we can do in a space that just... Mind blown.
Um, everyone's really excited, because you can see a lot of the interviews even happening in real time with the AI. It's just quite a funny thing to watch, an AI, not a human, engaging with these, you know, asking good questions and getting some good output as well. So that's been my week, but yours? A lot of great stuff as well. I mean, there's the podcast, uh, there's another one that we're doing here, folks, uh, which is more on the blockchain side of things. So stay tuned for that one, um, with FEX Global. So, you know, giving a few things away there. But, um, apart from that... And I'm behind the camera for that one, not in front, which is a different experience. We'll have to get you on. We'll have to train you up on the blockchain stuff. You'll have some osmosis, because you've listened to all the stuff that Arturo and I say. But, um, I've been doing something with Redbelly Network, on their blockchain, um, uh, podcast, which is called Insights. And so, you know, there's some really cool stuff out there as well. But, um, on the AI side, meeting with clients and... You know, the thing that I'm seeing in a trend, and for folks that don't know, every Friday I write something called Founder's Journey, and it's like, here's some things I've been up to; it's just a personal kind of newsletter, but then it also has things in there like, um, some lessons, or some poignant, thought-provoking stuff that I've found. And one of the things, um, through work, and you said you're doing stuff with The Smith Family... Through the clients that I've been having in different industries, one thing that's been a trend is more of them are coming back and knocking on our doors, because I think what's happened is that they've seen the trap. Chatbots are great, but it's a lot of hard work. You have to go back and forth with it to steer it to the right thing. It's magical. It's way better than what we had before
AI came out, but it still feels like a lot of work. And then on the other side, agents. Agents are great, but oftentimes you're seeing results where it just goes way off track, or it's a black box and you don't know how it got there. And people just want to have the middle ground. And what that means is, okay, it's agentic behavior because it's automated, but it's controlled, highly controlled, by you. So we're seeing much more of that agentic workflows kind of stuff that we built in SIKE many moons ago. So it's great. And we're going to be touching a bit on agents and agentic AI, the difference between the two, a bit later. So that's a good little... Good little segue. Good little segue to the first things that we're going to talk about. Um, should we dive into this? Because there's been a big... Have you heard of Google? Have you heard of Google? Yeah, well, let's not kick off with Google, because I think that's one of the big things that'll take up a lot of our time. Um, but wow. Nevertheless, get excited for that point. A lot of stuff there. Uh, but a lot of other big things. Obviously everyone's now trying to scramble around how to stay ahead in the news and the media, given all the stuff that's happening at Google. Um, so quite a few things have been launching over this week, particularly with the big players in this space. Um, let's start with OpenAI. Over the last couple of weeks they've launched, uh, something called Codex. Codex. Codex, it's been talked about for a while, but it's been a lot in beta, and actually even the full version, codex-1, is still in beta for a lot of people.
But they've got a new model out there trained with reinforcement learning, uh, which is all to do with, um, you know, for those who don't know what reinforcement learning is, it's actually teaching the AI by having it interact with environments and then, you know, saying, hey, you were good, I'm going to give you a reward, well done. Oh no, this was bad, step back. So it's actually, you know, teaching it with real people, with real insights and real information. Yeah, RLHF, reinforcement learning from human feedback. Exactly right, exactly right. Um, and so this is, you know, them trying to get ahead in the coding world of things. Um, you know, putting the pressure on the likes of Claude, but we'll talk about them in a moment. Um, and how it essentially works is, you know, this can actually plug directly into your GitHub repositories, or your repos. Um, and it can interact with the code in real time, in a way that attempts not to destroy, uh, the code that you may have, um, that you may be working on. You know, with tools like Bolt and those other tools, you plug that in, you utilize those tools, and every now and then it reconstructs all of your code. You have to have all these workarounds. Sometimes, if you're doing something important, you may have to back up the file that it's, uh, editing. Or, you know, what is often the case, and I think better practice, is breaking down a big file, let's say 700,000 lines of code, into sub-components, because, you know, you just run the risk of... Exactly. Errors and overriding. But you get less of that when it comes to, uh, yeah, Codex, it seems. Yeah, it's pretty good. So codex-1, I mean, there's a few things. If you compare it to some of the other models, um, you can see in the article here, which has scrolled down to... If you compare it to, like, o1-high, o4-mini, o3-high, um, codex-1 does outperform them.
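As a rough illustration of the reward idea described above (not OpenAI's actual training setup), here's a toy loop where a "human" score nudges a policy toward better-rated actions. Every name in this sketch is invented for illustration:

```python
# Toy sketch of reinforcement learning from human feedback (RLHF):
# the model tries actions, a "human" rates them, and the policy
# shifts toward the well-rated behaviour. Purely illustrative.
import random

random.seed(0)

actions = ["rewrite the file", "patch one function", "delete everything"]
# Preference scores a human reviewer might assign (+1 good, -1 bad).
human_feedback = {
    "rewrite the file": 0.2,
    "patch one function": 1.0,
    "delete everything": -1.0,
}

# Policy: one weight per action; higher weight = chosen more often.
weights = {a: 1.0 for a in actions}
lr = 0.5  # how strongly each piece of feedback moves the policy

for _ in range(200):
    # Sample an action in proportion to the current policy weights.
    chosen = random.choices(actions, weights=[weights[a] for a in actions])[0]
    # Human feedback nudges the weight up (reward) or down (penalty).
    weights[chosen] = max(0.01, weights[chosen] + lr * human_feedback[chosen])

best = max(weights, key=weights.get)
print(best)  # the policy converges on the highest-rated action
```

Real RLHF trains a reward model from human preference comparisons and optimizes the LLM against it, but the feedback-shapes-behaviour loop is the same shape as this sketch.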
Only marginally over o3, but significantly more than o1-high, which is a much older model now at this particular point. So its accuracy, in terms of how it codes and develops stuff, is far superior. That said, they haven't really put any comparisons here to the other models that are out there, particularly because they're so new in the market. But I doubt it'll be very long before other people start doing some comparisons in terms of, you know, quality, et cetera. But, you know, quite exciting, especially if you're an OpenAI user, or you're a business out there that utilizes this. You can now use their API to really create some cool stuff. I feel like, with, you know, you said 'new' there, if people ask you from one week to another, like, what's happening in AI? Oh, it's new again. It is. It's just new again. And it's just like a tile of, you know, different years or different months, and just the same characters going, it's new again, it's new again. It's like everything that came out yesterday is old. That's the rule of thumb. Oh, it's crazy. Um, sticking with OpenAI, and actually this links into a couple of conversations that have come up from a Google standpoint. Obviously wearables have been a big topic for a lot of organizations. Google's got their wearables, which we talked about. You've got big players like Meta with their wearables, and then you've got all the device manufacturers, Apple, Samsung, who have been integrating AI into their wearables. Um, so there was, uh, actually, I think it started off as, um, a leaked conversation that Mr. Sam, um, had put out to his business internally, talking about, you know, where is our business going? Not, where are we going? He's like, where is our business going? And then you can see some executives, high-level executives... oh, he really emphasized 'where'.
He really did. Oh, once again. Um, and the question was, are we going to be doing anything in wearables? And he was like, no, we're not going to be doing anything in the wearable space. Um, we're going to be doing things around tools and services that are going to be sitting on your desk and talking and engaging with you in a way that you don't even realize is happening. Yeah. And so he then went on to a talk, an interview, and elaborated on that, because it did become a leak. Um, so that's an interesting thing. So they're very much firm in the, we're not going to be doing the wearables, which I think is the right move, to be honest. I mean, they're fundamentally a software company. Don't spread yourself thin; focus on where your strengths are. And, um, so they're holding firm on that. Maybe this is later in the news, but there was, um, an acquisition that they made. Did you see that? Who's the guy? Do you know the guy, uh, the big designer? Very well-known designer, Jony Ive. Um, he has a company called io. Yeah. And, uh, yeah, there was a post on that, if I can find it. Um, and Jony was on it. Yeah. So they came out with a big partnership. What a photo, by the way. Can you pop it up? We won't be able to show it up there, but if you pop it up... Yep, I'll find it. Uh, that is on OpenAI. OpenAI, Jony Ive. Oh, yeah. There we go. First article right there. Sam and Jony. What is it, Push It Off the Screen, Sam, or whatever? Anyway, so, um, yeah, so computers are thinking and seeing and understanding now, and so they recently purchased... Yeah, they brought in the io team to focus on developing products that inspire and empower and enable, and it will now merge with OpenAI to work more, you know, intimately with research and engineering and product teams, uh, based out in San Fran. Um, an interesting, an interesting merger, kind of pushing into that
device development world of things, but more into that creative space, to really enhance it and make it more relevant for organizations like, um, you know, the business work that we deal with. And this is the photo, which kind of looks like... I want to go to the wedding. Is it AI-generated, or do you think it's real? I don't know. I would love to go to the wedding. I mean, I know Sam's already married, so... It's quite a romantic photo, isn't it? It is, but it's all good. It's all good in the hood. Um, yeah. So really interesting stuff there from OpenAI, but there's other news as well before we get to the big Google. Ooh, so Manus has done an update recently. They have, uh, they're putting a lot of pressure on the image generation world. I mean, OpenAI's image generation has been one of the crème de la crèmes in recent times. You've also got things like Leonardo and other tools, which also have their flow states. Um, and Manus has integrated a sort of agentic approach... Yeah, an agentic approach to how they generate images, which is actually really, really cool. Um, it's still in beta, so not everyone can have access. So essentially what it does is analyze the intent that you have with the image that you are asking for. So you say, I want to create a picture of a living room space, I'm trying to look for ideas around how I want to style it. And it will go, okay, what is the strategy and the approach that we should be taking? It'll structure that and go, okay, what are some examples of layouts that you may like? It'll bring up those examples, and you can then question those layouts. And then it'll go, okay, what is the style guide that you want to pursue? What are the colors that you like? And it interrogates and queries you, and then, using a whole bunch of other third-party tools, starts to generate these images for you based on that query.
So it's not just the AI tool doing its work in the background, which probably still uses agentic methodologies regardless, but it's like an active engagement in that, um, that flow of questioning how you want the image to come out. Um, so very, very cool. The idea is it creates a much higher-quality output. As we know, Manus is using a lot of third-party and open-source tools and paid tools to build a lot of the output that you're going to get, and this is no different in that space. But it does mean you can probably get a higher-quality result. It's still very much in beta, and I haven't been able to test it yet, but I found that very interesting. Fantastic. What's next? Um, LM Arena. Ooh. Unlocked $100 million. Um, yeah, they just closed a hundred-million-dollar funding round, which values them at 600 million dollars. Sorry, sorry, what was it? It's like the... The Roman Colosseum. Yeah. The arena where, um, AI gladiators come to fight each other to the death. I found it very interesting, because they are essentially just a platform that is all about benchmarking and comparison of different AI tools, and to unlock a hundred-million-dollar seed round? That's insane. It is. You know what's interesting there as well is that there are other benchmark, kind of, um, tools that do the analysis whenever there's a new model. Um, one of the best ones... and a lot of YouTubers, actually, some really interesting YouTubers that I watch, um, Theo, he's a great programmer, and there's the Primeagen guy, who's also a great programmer, and they touch on AI and stuff. And even those guys were mentioning this Aussie company called Artificial Analysis, uh, who are in that kind of LM Arena space.
And why I'm saying that is because if they're valued at the high level that they are, what does it mean for other companies in that space? So there's really interesting stuff out there. Aussie innovation is going off. Folks, um, you definitely need to go to Gen AI Labs for their events; they had 200 people at one of their last ones. Um, Build Club is doing some crazy stuff. DSAI, we know those guys. The benchmarks, I'm surprised, but also not surprised, because these are really important tools, and given, um, the importance that more businesses are going to place on being able to trust AI models, they're going to look to stuff like this. Yeah, exactly. And what they don't answer here, in terms of how they were able to get this seed funding, is what it is going to be invested into developing. So it'd be intriguing to know what they're going to be doing over the next couple of, you know, months with this initial stuff. Hopefully some of that will come out. Potentially. Yeah. It could be the next thing where you look at your AI system and it tells you. Yeah. I'm sure there are going to be, probably, AI tools to interrogate AI tools. It's like a circular economy inside AI is just going to start occurring. You know, a company that's doing a language model, you know, Anthropic or Google or whatever it is, and then they'll have a QR code on the back of their shirt, and when you scan it, it comes up with a benchmark. Yeah, they'll be waiting in a coffee line and stuff, and people just go, oh yeah, this guy's legit. Oh, that's good. Um, some other quickfire stuff: Meta is launching a startup program. They're really trying to push usage of their tool. So they're focusing on, how can we get startups utilizing our open-source platform? So they've launched a program for that. Definitely check that out. I won't go over it too much now.
Um, before we get into some more interesting things, I wanted to touch on the agent stuff that you've also mentioned here. Yes. There was an interesting article. Yeah. Well, first, there's a bit of an article here, and this kind of also leads into exactly that point, um, around agents and who is responsible for, I guess, the output of these AI tools and these agents. So as consultants, as businesses, as, uh, um... People. Yeah, as people, as businesses. A lot of them are going into these organizations, or even organizations within themselves are building these agent flows. And the outcomes are, while mostly accurate, sometimes it does make a mistake, and even mistakes from a customer standpoint as well, so the final user who's out there in the world doing certain things utilizing these tools in the real market sense. And, you know, things are coming up where there are errors, there are things that are going wrong, or it's making an incorrect decision. And so now it's like, all right, who is at fault? Is it the businesses who own the AI tools? Is it the people who are building the agents? Or is it, you know, uh, is it acceptable that you as a consumer, when you're using these tools, should understand their limitations and therefore expect that every now and then they're going to get it wrong? And so where does the responsibility lie in that workflow, around who is wrong and who is right? Um, you know, I think there's a mix of, like, educational kind of stuff there, but certainly the providers of products shouldn't just be able to, uh, wave a hand and just go, well, you know, you should just know. Like, even cigarettes, the packs have the warnings on there. And I'm not saying that AI has to have warnings of like, hey, here's Chris. He lost his teeth and his house and his fingers because he used AI. No, um, certainly that's not going to happen yet.
I mean, you know, I'm telling you, your future is not going to be like that, but... I hope so. Maybe. There's got to be some level of responsibility, is what I'm saying. Agreed. You know, it can't just be, um, waved off. So, as it comes to the main things that we talk about, the human element of it all is, it should always be important with any kind of AI usage anyway. Always check the stuff that you're doing. Um, but at what point does over-checking mean we're now having to put as much effort into reviewing as if we did it ourselves? So where does the line get drawn between the two? And this is what's raising the legal question at the moment, around the responsibility of use of these tools. Particularly for organizational, internal use, I think it's a pretty simple thing to solve. I think where the complexity lies is once you get into the consumer environment. Um, and you know, it's the general customers who are jumping onto tools like OpenAI, or, I mean, a platform like Manus, um, and you're utilizing all these things to get the outcomes, and then suddenly you go, okay, cool, that's accurate, I'm going to do that thing. When actually, oh, that was wrong. You just put chemicals inside your fruit. You just injected, um, you just sprayed, uh, disinfectant inside your body. Oh, that was rough. I mean, that sounds like an AI made it up, but look. That happens. That is definitely something that's happened. But you know what's interesting, speaking of this, I think there's a middle ground, right? There's fear mongers on the one side saying AI is all bad. There's people that overhype it saying, hey, it does everything, it's great, just trust it. And it's like, no.
You know, whether it's humans in the loop or it's the limitations that AI tools have, it's important to understand these things. And here's the thing: there are ways you can deal with, oh, it's hard to check AI, you know, and its results. Look at what ChatGPT did. They made it so that you can click on the links to verify. Exactly. Look at what other tools do, like what we do in SIKE. If you're using documents, it shows you where in the documents the results have come from, so that you can verify the answers. The important thing is that if you're building in this space, how do you make those workflows easy? Because it shouldn't be an excuse like, oh, it's too hard. AI breaks and changes and transforms and makes more efficient how you do things. So to deny it and go, oh, it's just as hard, that's because you don't have the right kind of workflows or the right tool. So chat with us, because Chris and I know the right tools out there, um, that can help your business. So anyway. I guess, on that note, around where the responsibility lies, there's an interesting article that just came out: in Taiwan and China, uh, there's actually a huge rise in people utilizing these AI chatbots, uh, for cheaper and easier therapy. Yep. So again, where does the responsibility lie? We're now, you know, dealing with literally people's livelihoods, um, and how they're feeling, their emotional impact, driven by their conversations with AI. That's a scary thing, I think. I think, um, I can see it helping. Um, the problem is, well, there's that kid on Character.AI, where the kid, you know, um, unfortunately removed himself from society, uh, because he was talking to a character that he thought was his girlfriend and treated it as such, and it encouraged him to meet her, and he interpreted that as going, well, I have to kill myself.
So just being very, very careful with that kind of stuff. People may be in a really fragile state, and how tools like this get used... which is why it's like, that whole professional side of things. Like when we talk about how AI would be used in a general practice, you know, the healthcare space, it's like, okay, not to provide clinical results or to give advice. But, you know, general stuff like summarizing might be okay if it's already being provided by a GP, having a professional that can overview the output and agree that, yes, this is okay. Because in future, I don't mind that my doctor, financial advisor, lawyer, they use AI. I know it's going to make things more efficient. I just care that they read the output. So that kind of stuff will play into this, but it's interesting, because mental health stuff is really big, especially these days. A lot of founders need this, so no doubt a lot of that might be founders. And what people are saying is, in some ways the chatbot does help; it's accessible, especially when ethnic Chinese tend to, um, suppress and downplay their feelings. Yeah. It speaks to people from a certain generation, Gen Zs, and they're more willing to talk about their problems and difficulties with these tools than they are with individuals, um, or with people, and especially in China, where, you know, confronting people to have those conversations and opening up about your emotions, particularly with elders, is a bit of a no-no in their culture. So these tools allow them to open up in that manner. But at the same time, who's controlling it? Who's getting the responses? Who is checking to make sure that these people are actually okay? Are there any triggers if the person is going, oh, you know, I think I'm going to do something?
What's the idea? Does it just go, no, don't do that? It's like, contact emergency services. Exactly right. Exactly right. Yeah. Interesting stuff. And so, just because I mentioned agents before, let's, uh, I don't know, maybe you want to discuss the difference between agents and agentic, um, I guess, terminology, because there is a difference between them. Absolutely. I mean, this is not new, folks, just clarifying there; there's been a whole lot of this kind of stuff, um, out there already, but it's great to see when people, you know, continue to share it. And I say this because there are multiple of these waves. Like when ChatGPT, with Codex, they're doing this stuff with code. And before that it was Bolt and Lovable and others and Windsurf. And then there was Cursor, doing its stuff around coding. There are multiple waves whenever a new company does stuff. Um, Canva, for example, they open up a whole world to people that want to build tools and stuff, because now they've got, like, Make and some other things... oh, Figma, sorry... that people are going to try out, um, in terms of making tools. This is just another reminder, and we're going to have this continuously, because more and more people are starting to come in. But agents. So what are AI agents versus agentic AI? An AI agent is a single LLM-powered system. Um, it executes one task at a time, uses tools like APIs and plugins. It might use MCP, Model Context Protocol, which is the protocol they use to actually get it to do things, like in Xero. Um, it might chain prompts together to plan steps, and it operates within a very narrow scope. Um, so for example, a travel planner that books flights and hotels, that's an agent. Agentic AI, um, this guy's saying, is a system of multiple agents, and, uh, I will
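A single agent of the kind just described (one LLM, a narrow scope, tool calls chained step by step) can be sketched roughly like this. `call_llm` and the two booking tools are made-up stand-ins for an LLM API and real plugins, not actual services:

```python
# Minimal sketch of a single "AI agent": one LLM-powered loop that
# plans the next step and calls tools (here, fake flight/hotel APIs).
# All names are illustrative stand-ins, not a real library.

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call that decides the next tool to use."""
    if "flight" not in prompt:
        return "book_flight"
    if "hotel" not in prompt:
        return "book_hotel"
    return "done"

def book_flight(city: str) -> str:
    return f"flight to {city}"

def book_hotel(city: str) -> str:
    return f"hotel in {city}"

TOOLS = {"book_flight": book_flight, "book_hotel": book_hotel}

def travel_agent(city: str) -> list[str]:
    """One agent, one task at a time, within a narrow scope."""
    history, log = "plan a trip", []
    while True:
        action = call_llm(history)    # the LLM picks the next step
        if action == "done":
            return log
        result = TOOLS[action](city)  # tool call (an API/plugin in practice)
        history += " " + result      # chain the result into the next prompt
        log.append(result)

print(travel_agent("Sydney"))  # -> ['flight to Sydney', 'hotel in Sydney']
```

In a real agent the `call_llm` stand-in would be an actual model call, and the tools might be exposed over something like MCP rather than a plain dict, but the plan-act-observe loop is the same shape.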
There's a planner, there's a retriever, there's an editor, there's all these things and it coordinates by memory messaging. decomposes and reassembles goals and then it adapts dynamically to failure or change so that's more of a system but also i would like to add that agentic ai i think you should also include the agentic workflows where There's maybe less that the agents can just do autonomously once they set off. They still are singular in terms of their task, but Why agentic workflows are really good is because being less of a black box, you get to see all the controls. Imagine an agentic workflow with 200 steps and you can see something's wrong with like this step. You can pinpoint it. And instead of having to redo every single agent in the daisy chain, you actually just get one of those agents just to fix their part. That's the kind of thing you could do. What are some good examples? Like, uh, so authentic workflows I deal a lot with, so I can give examples of that. So an AI agent, let's like, um, what's an example of that? Um, like for example, like for grants and tenders, where for example, people are, um, having multiple steps that they would normally go through as You know, as staff, uh, to A, read the tender, B, review previous tenders that you've got to answer, um, this particular question. Thirdly, you'd have like some sort of overview and making sure the language and the tone and it's incorporating the mission and vision of the company. Another one might be for research, you bring in new research documents, you're in healthcare, you're looking at mental health statistics, and then you've got your own priorities as a hospital, etc. 
You have the first agent do the research, the second agent align that to your priorities, and the third agent maybe write an internal memo. It's multiples of these things. Practically any organization you walk into where there are multiple steps a person does — if it's to do with knowledge, and reading, analyzing and interpreting things, and producing some output in words — agents can do that. Very basic agents, mind you. And so, agentic AI — an example of that? Well, for this guy here, what I just described is agentic AI. I'd call it agentic workflows, and I'd classify that under agentic AI as well. The first one was just a standalone agent, one agent doing multiple kinds of things. What I just described is in the agentic AI family, where what he's saying is that agentic AI is multiple agents, like an orchestra. I'm saying that even then, you might have agentic workflows, where the agents do less on their own but you have more control. So for me, what I described there is agentic workflows, which is part of agentic AI. Yeah, and I'd say the key difference between the workflow and general agentic AI is that agentic AI, in its fundamental form, shouldn't need as much prompting as a workflow. So a workflow — n8n is a great example of a really good workflow builder — is where you can bring in a whole bunch of APIs and different tools and tell them to go through a step-by-step process, and you can create branches and do different things. It's a system that at some point will need your interaction. Whereas pure agentic AI will tend to make those decisions by itself. Yeah, and then you trust that it's going to come up with something.
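That research-to-memo chain can be sketched as a fixed pipeline where every agent is a named step, so a bad step can be pinpointed and re-run on its own instead of redoing the whole daisy chain. The step names and stubbed agent functions here are ours for illustration; in practice each step would wrap an LLM call.

```python
# Hypothetical sketch of an agentic workflow: a fixed, inspectable chain of
# named steps. Each "agent" is a stub standing in for an LLM-backed step.

def research(docs):  return f"summary of {len(docs)} docs"
def align(summary):  return summary + ", aligned to hospital priorities"
def memo(aligned):   return "MEMO: " + aligned

PIPELINE = [("research", research), ("align", align), ("memo", memo)]

def run_workflow(docs):
    """Run every step in order, recording each intermediate output by name,
    so a failing step can be spotted and re-run without the full chain."""
    trace, value = {}, docs
    for name, step in PIPELINE:
        value = step(value)
        trace[name] = value  # less of a black box: every step is visible
    return trace

trace = run_workflow(["study_a.pdf", "study_b.pdf"])
# If only the memo step looks wrong, rerun just that agent on the stored
# upstream output instead of redoing research and alignment:
fixed_memo = memo(trace["align"])
```

Scaling this idea to the 200-step example is the same pattern: a trace keyed by step name is what lets you point at one step and fix only that agent's part.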
And sure, it might show you the thing, it might get you to the right result — oftentimes it will — but there are some workflows where you don't want to just roll the dice, even with these multi-agents; you've got a set way of doing things. So I'd say agentic workflows are for when you have that set, process-driven way of doing things, you don't want it to change, and you want reliable outputs; and agentic AI is where, okay, we can explore a little bit more — I want to see what the multi-agents can do. A good example: Manus is actually an agentic AI tool. It internally makes the decision about which chain or path to go down to get to the outcome you want. So, going back to the image example we talked about before, it makes the decision: okay, I'm going to use this tool with this tool, or this tool, to do this. Whereas with a workflow, you have a more structured pathway to get to something: I'm using this tool, this tool, and this tool. You've given me three options here, and I'll pick the best one based on what you've told me and your requirements. So it's like stepping through a pathway you've kind of informed it to go on, whereas agentic AI will make those pathway choices itself — within reason. That's cool. I love that — it's a very clear definition, because I hear a lot of people throwing this language around, and it's not quite what it is. So that was a good little side story there. Should we go on? Hammer me home with some of your points here. I just want to jump to Google, man. It's too exciting. There's great news out there, but I think we're going too far into the episode without talking about Google. Shall we?
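The contrast drawn here — a workflow steps through a pathway you've defined, while agentic AI picks the pathway itself — can be sketched as a planner choosing among tools at runtime. Everything below is a made-up illustration; the "planner" is a trivial stub standing in for an LLM's decision.

```python
# Illustrative contrast: a workflow follows a fixed tool order you chose,
# while an "agentic" planner chooses the tool itself. The planner rule here
# is a stub standing in for an LLM deciding which path to take.

TOOLS = {
    "sketch": lambda req: f"quick sketch for '{req}'",
    "render": lambda req: f"high-quality render for '{req}'",
}

def workflow(request):
    # Structured pathway: you told it exactly which tools, in which order.
    return [TOOLS["sketch"](request), TOOLS["render"](request)]

def agentic(request):
    # The system decides which path to go down based on the request itself.
    choice = "render" if "final" in request else "sketch"
    return TOOLS[choice](request)

print(workflow("logo"))       # always both steps, in the order you set
print(agentic("final logo"))  # picks the render path on its own
```

The reliability trade-off falls straight out of the sketch: `workflow` always produces the same shape of output, while `agentic` can surprise you — which is exactly why you'd pick the workflow when you "don't want to roll the dice".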
There is one thing I want you to talk about before we do Google, which is Anthropic. Okay, so it was 3.7, and we thought it was going to be 3.8 — oh my god, what have we jumped to? Four! One, two... no, four. This is how we count. They've jumped to Claude 4, and there's some really interesting stuff there. They've introduced this model with a couple of things here on screen: extended thinking with tools — that's in beta — and new model capabilities; it can follow instructions more precisely. I watched a video on it because it was just released overnight, and credit to the AI folks out there who jump on this stuff straight away — unfortunately, they're going to be very busy for the rest of their lives if they keep that up. But basically, if it sees you're trying to get it to do something wrong, instead of just doing it, it'll say something like: "I see you're trying to force me to answer this question in this way; even though I know it's wrong, I'll still give you something." It actually says stuff like that, because it recognizes that, hey, you're trying to be a bit naughty here. There are new API capabilities out there too, especially for the MCP stuff — Model Context Protocol, which a lot of others are using, came from these guys at Anthropic. And new coding capabilities — I'm sure there's going to be a lot of testing out there. So keep an eye out; it's just come out, it's just the announcement. They've also got SWE-bench and Terminal-bench results, and sustained performance on long-running tasks that require effort and thousands of steps.
So, being able to keep that consistency in how it does things. There are a couple of problems when it comes to really large contexts in AI. One is the lost-in-the-middle problem: if you've got so much context, it'll remember the start and remember the end, but it starts to forget the middle. The other is the needle-in-a-haystack kind of thing: having something really out of line with the rest — say you put some technology-related sentence into the middle of the Bible, something that really stands out — and seeing whether it can pick that up. If it can't, that's going to be a problem, because you'll need that for certain kinds of work. So there are improvements there. I really want to try out the coding, because Claude is used in Bolt, it's used in all these other tools. I wonder — does Bolt get it straight away, or do they have some kind of... Usually these companies have some great arrangement there, because you pay Bolt, you don't pay Claude. Exactly. You pay Lovable, you don't pay Claude. But it's really interesting — we'll no doubt start seeing it come out in some of these tools, probably after some testing. I wouldn't rush it out. Yeah. It's crazy, because with Bolt they just give you the latest model, which is great, but you don't get the chance to choose unless you go to the DIY version — Bolt was good in making their code open source, so people have built forks, other versions that don't have all the great things you get with a paid subscription, but still quite interesting. So you're paying for other subscriptions that way. That's a really good point. And it's a timely thing that they've launched this.
Probably a bit of a knee-jerk reaction to what we're about to talk about, which is the big market leader in all things tech, Mr. Googly Google. This is Google. That's Google. This desk is Google. I know we talked about a lot of news just now and all the stuff that's been launching, but that stuff has been hard to find because of everything these guys are doing. You type "AI" into the news and it's just Google. So they had their I/O earlier this week, and there aren't enough fingers in the world to count how many things they announced. It's over an hour long, and pretty much everyone out there who wanted to cover what happened had fifteen-to-twenty-minute-long videos. What was your favorite part? Because there are so many things out there — what stood out to you? Firstly, the updates to Gemini, just from a consumer standpoint — from a commerce perspective, and from a general app and phone user perspective. There are a lot of updates there we'll talk about. And a lot of the cool stuff they're making accessible to businesses and organizations when it comes to coding, image generation, video generation, all that type of stuff, I thought was very, very cool. So let's go through it, starting at the top left with the coding side of things. There was a huge update to Gemini 2.5 Flash and 2.5 Pro, and a new API they're rolling out, which is part of their new coding system, Jules. That's currently in beta — you still have to request access — but I jumped in earlier today and was able to have a look.
I think it may just be because we have a business account with them. But they have a huge array of new coding assistance tools and different language models you can utilize — a very, very powerful way to start building web applications. Specifically web applications, because that's what we build. I find it still a bit hit-and-miss — similar to Bolt and other tools — but you can build some really interesting web applications by vibe coding. And with the launch of Claude 4, as we just talked about, and things like Codex, I'm really intrigued to see how this model compares. They released a lot of stats saying it's awesome, it's amazing, it's up there with all the big players — but obviously, as a knee-jerk reaction, all the big players suddenly went, oh, we've got a new model as well. So there's going to be a balancing act; let's see what occurs there. That's the first thing. What are some of your favorite things? My favorite was the video. Oh, this is Veo? Yeah, Veo. Here's an example I'll just play — sound on. "Let's see some opinions. This is not a real guy. This is not a real person." And that is... this is not a real person. And this was the craziest of the four-odd videos they showed. Yeah, the car show. "This ocean, it's a force." You can kind of tell some of the stuff is a bit wonky, yeah. But the point is they're able to just describe it — and not only that, it adds what you normally would have had to do separately in post: adding the words, syncing up the sounds. It just does it. It is freaking crazy. Like we were discussing before — maybe this is fake. What you're seeing on screen here, this whole I/O,
was not real. Yeah, it was all generated. The next AusDeFi or the next AI event — we might not be real in the next one, folks, so stay tuned; we'll see what the heck we get to do with this. Some of the cool things they launched with Veo 3 — Veo 2 is still accessible to the general public; for Veo 3 you have to have the premium subscription, and we'll talk about that in a moment. They have video referencing as part of it as well, so you can upload a video content piece or an image and then create something off the back of it, or add things to that video. A lot of context-driven creation, which is really, really fun. Then there's Lyria 2, their audio engine, for creating not just general sound effects but actual orchestral music that is really, really rich. Here's an example. So, what you saw with Veo 3 was it adding in the audio cues — having people talk and engage — plus the sound effects and the background sound effects. Here, this is a cooking video where they literally put in the background audio for a cooking video. What the heck? I can make steak videos. It sounds like a steak video, folks. It's amazing. And then the very final one, which I love — and this is what you've seen things like Leonardo do with their image generation tools — is Flow. Okay, what does that mean? When you create your video, it might give you a few options: is this the kind of thing you're looking for? And you go, yes, this is the style, and you end up pivoting closer and closer and closer to the actual video you want. Super powerful — obviously very, very expensive to utilize — but for content creators who live and breathe this stuff, for people in video production,
this is just going to be an insane way to create quick content. Like B-roll in general — not having to go out and film B-roll; you could just generate it, and it looks as real as it does. It'll be interesting to see what the knee-jerk reaction is from businesses like OpenAI with Sora. Are they going to be so slow getting something new out there that the Chinese labs and Google and others just beat them, so that when they do put something out, it's just, eh? Yeah, exactly. Sora was great when it was first shown, and then however many months later — it was probably only weeks, but it felt like months. It did, didn't it? So we'll see how quickly they move. It was a surprise, and a good surprise, though. Gemini had a lot of updates on the general consumer application side. It can now access your camera and do a lot of screen sharing, which is free in the Gemini app — anyone who has access to the Gemini app can use it. They've also integrated the Imagen 4 engine, which lets you start creating and editing images. It was available in beta in certain regions, but it's rolling out now, although a lot of this stuff is still US-only. So you can upload images and ask it to change them in a certain way — even from a language perspective, putting text on the pictures, which has always been a real issue for a lot of AI generation tools. But I did see some people doing comparisons on this. Oh? People were saying that, in the images that were showing up, Gemini tended to create much more beautiful, rich,
colorful results — whether it was the depth of field or the context within the actual shot itself, it created better visual images — but it was still struggling on the text side with a lot of text, whereas OpenAI tended to do really, really well with text on images, but the quality was a little flatter than Gemini's. I can show some examples. Back to the business side of things — any cool stuff that you really liked? Too much. Too much? Like, there's the research side with all of their scientific stuff. They've been known for AlphaFold, but now they're doing something called Co-Scientist as well, and some other research-related things. I think the interesting thing there is that it's making this specific and more usable by the folks in the labs, in the R&D rooms, in the strategy kind of places within businesses and government departments, who can now start to use this kind of stuff. So Google's making a massive play there. Yeah. And especially because I work in that knowledge management space, I'm interested in where this kind of stuff could be useful in the tools that we provide. There's a thing around: okay, so Google can do all this, and that's great — but it doesn't necessarily mean, and this is hope for startups, that people will only be able to access it by paying for a high-level subscription or becoming a Google customer. For various reasons, some can't just do that, and they need it anyway. So there's value in having a way that people can stay in their workflows — taking these things and putting them into your own tools — and we're lucky that we're able to do that. So on the business side, the knowledge management things have really stuck out for me.
Yeah, and expanding on that, some of the stuff around Deep Think — having these incredible AI systems, and I love what you've put here. It's essentially this interactive, deep-thinking engagement tool now built right into Gemini. Expanding on that, I think they've had Canvas mode for a while, but that's now integrated with the deep research tools, and you can now upload your own files into that experience as well. So it enhances exactly what you were talking about with collaboration and research: now your own insights and research can help collate and improve the output you're searching for, based on the things you want and already have, combined with what else is out there. And I know a lot of the other tools have had that for a while — again, OpenAI, Claude, even NotebookLM has the ability to do that stuff — but now it's there in the Gemini application and services, which is very cool. Then one of the last big ones, or two big ones, is generally to do with Google Search. They've made a lot of AI integrations into search — not just the crazy summaries anymore. Well, it still does the crazy summaries, I think, but apparently they're better. And there's AI Mode, which is competing with the likes of Perplexity, actually, because Perplexity had their whole new search models come out — they were one of the first to market with general AI-powered search tools.
And you were able to start doing a lot of booking and shopping — using, as we just talked about, agent-style workflows or processes to do things on the web inside Perplexity without having to do them yourself. Now AI Mode in Google is going to do the very same thing: it can start booking flights for you, it can purchase things for you. Obviously it's going to be a steady rollout, only in certain environments or with certain applications, but that's coming. A pretty cool visual one, though, that we've talked about is the try-on. Yes! Pull up the video here. Awesome. So have a look at this, folks. Have a listen: "It asks me to upload a picture, which takes me to my camera roll. I have many pictures here; I'm going to pick one that is full-length and a clear view of me. We built a custom image generation model specifically trained for fashion... and it's back. The AI model is able to show how this material would look on me." Wow. So it's currently only available in the US, and I actually saw some of the YouTubers talking about this — Marques Brownlee was showcasing it. Now it's been globally announced, so it's out there in the real world. The ability to take a picture of yourself, jump onto Shopping, look for some clothes, and then swipe left, left, left, right, right, right to check how the clothes fit you — it understands your body size and shows what the clothing is going to look like on your body. It's a very, very cool feature. Something I'm surprised hasn't come sooner, but I guess it's a lot more complex to solve than we think. Yep.
And I've seen this applied in the real world many years ago, when you walk into shops or shopping centers and sit in front of a mirror, and you can swipe on special mirrors to see how the clothing fits you. But they definitely didn't have the AI technology around your size and fit to make it a commercially viable solution. And Google's done it. This on screen here, folks, is the overview — they've got all the different sections highlighted from Google I/O, so you can dive in more. There are going to be, as you said, definitely more groups and people diving into specific sections, so check that out. And as we get to try some of this stuff, we'll no doubt bring it back. But what a release, what an update. Masses of updates. Crazy stuff. We're probably going to round out there, folks. Anything else? Any final things for this week? No — I'm hoping this week will be slower, just so we can take a breath, absorb all the things that have come out, and actually test them. Because, as we were talking about, tomorrow it could just be out of date and something new will come up. It's like, I didn't get to test these five things! Come on, slow down. It's crazy, folks. Stay safe out there. Make sure you get your mental health sorted — as you can see, AI could help with therapy, but be careful with it. Keep building. I think the lesson is: don't just stay watching, actually get involved, because the more you know, the more you'll really understand this space. It is exciting. Change is hard, but this is exciting change, folks — a different kind of lens there. If you've got any questions, make sure to reach out to us.
This is Chris and Mark. I'm Chris? You're Mark? I don't know — who knows? You know what, it's going to be different when the AI takes over. Yes, they'll be like: I'm Chris, and this is your news. See you there. Catch you guys.