Examining the Current Decentralized Storage Landscape to Help AI Fulfill Its Potential in Web3, with Ryan Levy @ DataHaven / Moonbeam (Audio)
Crypto Hipster
486
00:35:55 · 29.51 MB


Ryan Levy, Head of Business Development at DataHaven. Ryan is a seasoned executive with nearly 20 years of startup experience across Web2, Web3, blockchain, and data. A master at “connecting the dots,” Ryan leads business development, partnerships, and go-to-market strategies, building ecosystems across DeFi, blockchain networks, data, RWAs, DePIN, gaming, and more.
Currently Head of BD & Partnerships at Moonbeam and DataHaven, Ryan previously held leadership roles as VP of Business Development at SKALE Labs (SKALE Network), Head of Protocols & Partnerships at Chainstack, and Head of Partnerships at Kadena.
Born and raised in South Africa, Ryan resided in Australia for many years and now calls California home. He greets each dawn with an espresso and a workout, which sets a tone of clarity and energy for the rest of his day. His guiding mantra, “Never Give Up,” drives his relentless pursuit of success in both his personal and professional life.

[00:00:03] Hello, everybody, and welcome to the Crypto Hipster Podcast. This is your host, Jamil Hasan, the Crypto Hipster, where I interview founders, entrepreneurs, executives, thought leaders, amazing people all over the world of crypto and blockchain. And today I have another amazing guest. I have the head of business development at Moonbeam. His name is Ryan Levy. Ryan, welcome to the show.

[00:00:26] Thank you, Jamil. Great to be here. Happy Friday. Always great to have a good chat with interesting people. So glad to be here. Thank you. You're welcome. And thanks for joining me. This is going to be great. So let's ask you first. I always get amazing answers. What is your background? And is it a logical background for what you're doing now?

[00:00:50] Oh, that's a great question. I don't know if anybody has a logical background for being in the Web3 space. But I think it's probably the ultimate space to be working in right now. Let me share a bit about my background. So I'll start off. Firstly, you've probably realized that I was not born or raised in San Francisco.

[00:01:11] I'm actually a South African born and raised, spent a lot of my time, about 15 years actually living in Australia and have called California home for quite some time.

[00:01:23] And through that entire journey, my focus has always been around tech, and in particular, which will be interesting to share later on, around data, data centers, and essentially everything that revolves around data, whether it's building data storage systems, data analytics, data warehousing, a little bit of cybersecurity.

[00:01:52] And now, what I'll share as well when we talk about it: Moonbeam, and our exciting expansion into the Ethereum ecosystem with Data Haven, which is a storage platform too. So when you ask the question around the logical journey, I'm going to say, for me, yes. Everything that I've either built or learned in the past applies. I started off my career as actually a systems engineer.

[00:02:17] So actually building out data centers and data center infrastructure, and then crossing that chasm into the whole business side, you know, the startup side and the whole sales and BD and partnership side.

[00:02:29] And everything I've done or touched or learned during that time translates exceptionally well to this world of Web3, whether we're talking about it from just a blockchain perspective, an infrastructure perspective, an app or dApp perspective, you know, when we talk about the viability and the use for decentralization.

[00:02:50] And then, of course, now as we're talking more and more about data, AI, AI agents, real world assets, DePIN, all this kind of stuff. So my whole journey, I would say yes, has definitely been a logical step into this world. Interesting. Yeah, I built databases for 20 years. Oh, wow. Okay. Yeah. There you go. There you go. Were they structured or unstructured? They were structured. Yeah.

[00:03:19] It was for AUM reporting and stuff like that. Gotcha. So yeah, yeah. That's interesting. I spent a lot of time working on structured and unstructured databases, the likes of Couchbase, and working with Hadoop and all these data warehousing tools. And it was all from the BI side, so the data analytics side. Hadoop's harder than blockchain. That it is. That it is. Yes.

[00:03:47] Oh, it's mostly SQL, but I digress. There you go. I want to find out about Moonbeam. You know, Moonbeam and Data Haven. Yeah. What are they all about? What makes them great? Yeah. Okay. I'll talk about both. I'll start with Moonbeam, and we'll venture the story into the world of Data Haven, because we are at such an exciting time. I'll share some stuff on the world of data.

[00:04:15] I guess by the time this is released, it will be okay to air. So, Moonbeam.

[00:04:22] We'll be right back.

[00:04:50] But we have a real differentiation in mind, which is a high value add to any projects, both building in the Polkadot ecosystem or building on Moonbeam. And that is that we are a gateway to liquidity. So, we have these partnerships with GMPs, or general messaging protocols, and bridges, like Axelar, LayerZero, Wormhole, Hyperlane, you know, name the big ones. We're probably partnered with them.

[00:05:19] And what we essentially do, in fact, I should start with this. We have this amazing term, which is build here, grow anywhere. And that is, we have these dApps that will build on top of Moonbeam, but they know that through these GMPs, through these bridges, they have access to users, wallets, liquidity, and dApps throughout about 100 other blockchains. Now, the idea behind that is not to build on Moonbeam and disappear to some other chain.

[00:05:49] It's build on Moonbeam and interact with users and whatnot seamlessly across the other chains. We have about 250, maybe 275 active dApps, live and operating day in, day out, generating transactions and TVL and whatnot for us. And what we did, I want to say about six months ago, is we really narrowed down our focus. You know, there are a lot of general purpose blockchains out there.

[00:06:18] We don't want to be just a general purpose blockchain. We want to have subject matter expertise in certain areas. And although we broke it down into four main verticals, being DeFi, DePIN, RWAs or real world assets, and gaming, we have found that we have got tremendous traction and success around RWAs and gaming. Makes sense. I get it completely.

[00:06:43] Especially if you look at where the industry is and where it's going, those narratives totally make sense. In fact, the RWA narrative has stretched exceptionally well into LATAM, Brazil in particular, and we're starting to see some great traction in El Salvador and Argentina as well. But I guess in a nutshell, that's who Moonbeam is.

[00:07:07] And we're continuing to grow, build, and onboard new dApps pretty much daily. Got it. And then Data Haven. Yeah. All right. Data Haven. So I actually head up BD for both Moonbeam and Data Haven. What we did, also about six months ago, was we announced this expansion into the Ethereum ecosystem.

[00:07:33] Not a departure, not a pivot, not a desertion, if that's even a term, of Moonbeam, but more of a complementary expansion into the Ethereum ecosystem. And what Data Haven actually is, is an AI-first storage platform secured by EigenLayer. Kind of a mouthful.

[00:07:57] And, you know, they're very short and sweet terms, but there is a lot of power behind what we're actually doing. One, secured by EigenLayer. So we're the first secure decentralized storage platform to be built as an AVS, which is an autonomous verifiable service. AVS used to stand for actively validated services, but Sreeram and the EigenLayer team changed that to autonomous verifiable service,

[00:08:26] which resonates exceptionally well with what we're doing, because we're so focused on AI, AI agents, AI data. And I'm sure we'll dive a little deeper into that. But if you really want to hear the down and dirty around what we're building with Data Haven, let me share a little bit more. So we're building a provable, verifiable, tamper-proof, and censorship-resistant storage platform.

[00:08:54] And our first focus out of the gate is on all things AI. It is a general purpose storage platform. You can actually store anything on there. Files, media, web services, web front ends, databases, whatever you want. But where we're targeting the market is really around capturing this world of AI. And there's a number of reasons behind why we're doing this and how we're doing this.

[00:09:22] And I use an analogy that I feel resonates with everyone, whether I'm doing an ELI5, you know, explain like I'm 5, or an ELI70, explain like I'm 70. And that is, when I was a kid, and I think when most people were kids, we played a game called telephone, or broken telephone. Sitting in a circle with 9 of your mates, so there's 10 of you in total.

[00:09:50] The first person whispers a message into the next person's ear, and it continues all the way around. And inevitably, the message never got to the last person intact. It was modified in some manner. It was changed. Either it was changed because something was lost in translation, or it was changed because somebody decided, hey, I'm going to add my flavor to it. So although the core message was right, there were some changed words or components of it.

[00:10:18] Or, and inevitably there's one of these in every group, there is somebody we would call naughty or nefarious who decided, oh, I'm going to send a message so that that last person has to do something that they definitely don't want to do. And so the reason I share that as an example is that when you look at AI today and this proliferation of AI agents,

[00:10:43] what we're seeing is not only do you have an agent that either leverages a piece of data that already exists or generates a piece of data and then shares that with another agent, the other agent then makes some decisions, then may actually pass that data back to the first one to make additional decisions or iterations. There's one fundamental flaw in that. If you cannot in any way, shape or form, verify, validate,

[00:11:12] prove that that data has not been tampered with, you are absolutely wide open to corruption, nefarious behavior, and terrible results. So where we come in is we're building this platform that will ensure verifiability. We can prove that the data has not been tampered with. There is no centralized entity. We are 100% decentralized.

[00:11:40] We encrypt the data, meaning, one, only the data owner can decrypt it, and two, anybody operating or using the network, so the validators or whatever else that are running it, cannot read that data. It's not in plain text or visible in any way, shape or form. And so that's what we're addressing from an AI perspective. And I mean, I am a huge advocate for the use of AI and in particular AI agents,

[00:12:09] which although today I think a lot of these agents are actually task bots, you know, some human has said, go do this. It goes and does that. It comes back. And then you say, go do that. And they go and do that. But as we venture into this world of true autonomous agents, if we cannot prove that that source data is what it should be and hasn't been modified, we are in for some interesting times. And so that's the problem that we're looking to solve.

[00:12:40] Very interesting. I told you before we started that I built that. I mean, I told you at the beginning, I built databases for 20 years, right? Correct. Yeah. So I'm trying to put my database creator hat back on after eight years, after a long time, right? Yeah. And I'm like, you need that data to be uniform in order to come into the database and do data quality on it, right? So you have different kinds of AI data. You have, you know, data that is collected by ChatGPT, you know, by DeepSeek.

[00:13:08] And you also have federated learning models that capture data completely differently. Yeah. So how do you make the data uniform so that you can do data quality on it? So we don't. That's actually not where we come into play. What we do is something slightly different, but it's a great question because this is what either the data creators or the agents need to be able to do,

[00:13:35] which is creating it in a format that they can leverage and use and share and whatnot. But we don't do that component. What we do is we provide these verifiability components through a number of techniques, one of which is: every piece of data or any content that is stored on Data Haven, we treat as a standard piece of data, which we then fragment.

[00:14:03] So, whether you want to use the term shard or shatter, it's probably not shard; it's more like a shatter. We break it up into disparate components and fingerprint that data. And the reason why we do that is, when an agent or an app interacts with the data, it's actually going to check that fingerprint. And when it checks the fingerprint, it can tell instantaneously whether one of those pieces that has been shattered has been modified.

[00:14:31] So we're verifying that the data is correct. It's kind of like taking a puzzle and saying, well, here's the image of the puzzle. And when you go look at the back of it, there are three pieces that are wrong; somebody's put in the wrong pieces from another puzzle. And so we're mitigating that issue, or that concern, by using both this shattering type capability and, of course, the encryption and whatnot in the background.
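For technically minded listeners, the shatter-and-fingerprint idea described above can be sketched in a few lines of Python. This is only an illustration of the general technique (fragment the data, hash each fragment, re-hash on access to detect tampering), not DataHaven's actual implementation; the chunk size and function names are invented for the demo.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real systems use far larger fragments

def shatter(data: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Break the data into fixed-size fragments ("shattering")."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def fingerprint(chunks: list[bytes]) -> list[str]:
    """Hash every fragment; the list of hashes is the stored fingerprint."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def verify(chunks: list[bytes], expected: list[str]) -> list[int]:
    """Return the indices of fragments whose hash no longer matches."""
    return [i for i, (c, h) in enumerate(zip(chunks, expected))
            if hashlib.sha256(c).hexdigest() != h]

original = shatter(b"trusted agent payload")
prints = fingerprint(original)

tampered = list(original)
tampered[1] = b"evil"                    # a wrong puzzle piece is swapped in
assert verify(original, prints) == []    # untouched data checks out
assert verify(tampered, prints) == [1]   # the modified fragment is caught
```

An agent reading the data recomputes the hashes and compares them against the published fingerprint, so any modified fragment is detected before the data is used.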

[00:15:01] You did raise something, though, which, as I told you before, I love the ELI5, ELI70 kind of framing. And you raised ChatGPT, DeepSeek, and whatever else. And I've had some very, very interesting conversations, both with AI developers or data creators as well as internally. And one of the terms that I just absolutely love sharing, and I'll share it with you now,

[00:15:28] but it's more than just solving the technical side or the technological side or the infrastructure side per se, because the term goes like this. People tend to overshare but undercare. And if you look at what people are doing with ChatGPT or DeepSeek or any of these kind of, you know, make my life so easy, generate some content for me.

[00:15:55] And by the way, here's my social security number, or here's a document from our organization. And they're suddenly uploading IP and, you know, highly sensitive data. I mean, I can't imagine what would happen if somebody in health care were to upload some kind of research paper that hasn't been redacted. So there are two components that need to be solved.

[00:16:19] One is the data authenticity side. The second, which we can't solve, but I'm hoping the industry can, is behavioral change, because we share too freely. It's awful. I mean, I have three teenage kids and they use AI daily. It's how they build content. It's how they validate, or learn even.

[00:16:47] But I can tell you, if I hadn't told them that they need to be rather cautious about what they upload, then I wouldn't be surprised. I mean, you know, people today consume their news from social media. They believe what they see. So they're not scared to just share anything. Yeah. I see it on social media, people oversharing and undercaring. And in a way, it's a money grab. Yeah. Yes, it is.

[00:17:16] So it's kind of nefarious, you know, to an extent, you know. It really is. Yeah. We don't want it to be nefarious. We want AI to fulfill its potential. Right. So how do we help AI fulfill its potential in Web3? Yeah, that's a very good question. So it comes down to, I'll use those two components again. It comes down to both the behavioral side, so the human side behind it.

[00:17:45] It comes down to the technological side. And it certainly comes down to guardrails. If you look at blockchain or crypto as a whole, whether people do or don't want it, regulation is actually going to help create those guardrails, where people know what is good to do and what's not good to do. You know, not everybody's a degen. The majority of people that will end up using crypto and blockchain

[00:18:14] are far from degens. They're going to use whatever products are prescribed. You know, they're going to use centralized exchanges. They're going to, for some part, listen to what the government is talking about and whatnot. And if you take those regulations or those guardrails and bring those into the world of AI, we can then be a lot more successful in honing in on the use cases, how we use the tools, how we use the data, how we use the agents, why we build the agents.

[00:18:46] And then, most importantly for us, when it comes to the Data Haven side, is the storage side of things. We want to ensure that all these decisions being made, whether it's a financial decision, so think of some financial advisor or a trader or a wealth manager or whatever else, they're making decisions on data that they can verify in a timely manner.

[00:19:10] And I mean, even if you look today at oracles, which are the incredible technology that brings external data, so Web2 data, onto the chain, which is really complex to do, they have to verify what they're bringing on chain. Remember, the blockchain is designed to be immutable. What you bring on chain stays on chain. So it's about making sure that that whole pipeline holds from beginning to end. I mean, you know databases exceptionally well.

[00:19:40] There is a data pipeline. It goes from some person out there generates some kind of data. We need to do some kind of ETL or ELT conversion or whatever else to make it readable and usable and whatnot. And then there's an output and people make decisions based on this output. Or like with agents, you have a database talking to another database.

[00:20:02] So we have these agents that are being developed to help save us time, do tasks that are either mundane or costly, but do so in a trusted manner. And so, what we've even seen, and I should actually talk about this too, what we hear a lot about out there are these trusted execution environments, or TEEs.

[00:20:26] And the whole concept behind TEEs is that you can have people work on this data. I'll use healthcare as an example. You can do some research. You can work on PII type data and HIPAA compliant data and whatever else. But you never want to reveal that to the external world. You don't even want to reveal it to a competitor.

[00:20:53] Competitor is probably a terrible term; to another healthcare institute or whatever else, because that's your secret sauce. You've spent your time building this analysis, building this research capability. But what you do want to share, because this is critical to share, are the results. And so with these trusted execution environments, you actually have the ability to just present a result without showing the backend working and calculations.
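Real TEEs (Intel SGX, AMD SEV, and similar) enforce this boundary in hardware and produce hardware-signed attestations, but the compute-privately, publish-only-results pattern described here can be sketched as a plain function. Everything below, field names included, is a made-up illustration of the interface shape, not any real enclave API:

```python
import hashlib
import json

def enclave_analysis(private_records: list[dict]) -> dict:
    """Toy stand-in for a TEE workload: analyze sensitive records, then
    release only the aggregate result plus a digest of it, never the
    records themselves."""
    result = {"avg_age": sum(r["age"] for r in private_records) / len(private_records)}
    # Real TEEs return a hardware-signed attestation; we fake it with a hash.
    attestation = hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()
    return {"result": result, "attestation": attestation}

out = enclave_analysis([{"age": 34}, {"age": 52}, {"age": 45}])
assert set(out) == {"result", "attestation"}  # the raw records never leave
```

Callers get a verifiable result they can act on, while the underlying research data stays inside the boundary.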

[00:21:21] And that also helps put guards around sensitive data, but allows people to still use the calculations, or, I'll use the word results again, I've probably used it 300 times, of this data analysis to make additional decisions. Interesting. I like what you're doing with data shattering and getting it down to the common denominator. I like that. Yeah.

[00:21:51] I, I, I question the application so far. I mean, it's going to work. I can see how it works. Like in health, in health data, where you're comparing two medicines that are new. One that recently came off FDA approval and one that is still in, say, a phase three trial. How do you compare the two, and the effectiveness and the efficacy? Like, that is still an unknown area. And it's going to be, you know, for a while. Totally. Totally.

[00:22:15] But, but it's a good use case that you can't get with centralized storage entities, right? You can't, you know. It's very interesting. We have an incredible healthcare project that is built on Moonbeam. So this actually speaks to the whole Moonbeam meets Data Haven world, because I didn't even share this.

[00:22:39] One of the biggest premises around this expansion, and then I'll get back to my healthcare example, is that what we will see, and are already seeing, is that projects building on Moonbeam are already thinking about using Data Haven as a storage platform because they need a fully decentralized storage platform. Now, gaming, potentially not; RWAs, financial data, healthcare data, et cetera, et cetera. Great.

[00:23:09] And in turn, what we're going to see, and I know this both from the wonderful EigenLayer ecosystem that we're going to live in, but also from any projects that will store data on Data Haven, is that they in turn have the opportunity to think about, hey, I've got this data repository. I love how it's been stored and protected and whatever else.

[00:23:34] I can also build part of my dApp or move part of my workloads to Moonbeam. So we're creating this kind of cyclical lead gen model where Moonbeam feeds Data Haven, Data Haven feeds Moonbeam, and vice versa.

[00:23:50] But going back to your healthcare example, especially around new testing and whatever else: we were talking to one of the healthcare, health tech, should I say, projects that is built on Moonbeam. And one of the problems that they've been solving for is the exact question you asked in the beginning, which is: how do you normalize this data coming from 3,000 data sources?

[00:24:18] You know, you've got stuff coming from however a surgeon inputs some data. You've got it coming from a pharmacist. You've got it coming from the pharmaceutical company. You've got it going to the pharmacy itself, whether it's CVS or Walgreens or whatever else it is. You've got data coming in from research. You've got data coming in from other healthcare institutions. The problem is twofold. They all come in in different formats. And the export usually requires human intervention.
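The many-sources, many-formats problem described here can be illustrated with a tiny mapping table: translate each source's field names into one canonical record shape so records become directly comparable. This is a generic sketch with invented field names, not the actual project's schema:

```python
# Map each source's field names onto one canonical record shape so that
# records from very different systems become directly comparable.
FIELD_MAPS = {
    "surgeon":  {"pt": "patient_id", "medication": "drug", "dose": "dose_mg"},
    "pharmacy": {"patient": "patient_id", "drug_name": "drug", "mg": "dose_mg"},
}

def normalize(record: dict, source: str) -> dict:
    """Translate a source-specific record into the canonical format."""
    return {canonical: record[raw] for raw, canonical in FIELD_MAPS[source].items()}

a = normalize({"pt": "P1", "medication": "aspirin", "dose": 81}, "surgeon")
b = normalize({"patient": "P1", "drug_name": "aspirin", "mg": 81}, "pharmacy")
assert a == b  # two differently shaped inputs collapse into one comparable record
```

Normalizing in software also removes the human-intervention step in the export mentioned above.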

[00:24:47] And you know for sure that when anybody touches the keyboard, you run the risk of a stray character getting in. And that will completely destroy results. So what these guys are actually working on is normalizing that data before it goes out for analysis or use by external parties. So it's verifying the data.

[00:25:12] It is maintaining it within a HIPAA compliance and PII compliance framework and whatever else. But what we were talking about yesterday, and this is why, I guess, I went full circle with my story, is that they're built on Moonbeam, but Data Haven is going to be the storage repository for them because of all these incredible verifiability and authenticity components that will be in place. Awesome. Awesome. HIPAA compliance.

[00:25:42] That goes into the realm of – I don't want to talk about HIPAA compliance. I mean, it's boring. I got to go to lab work. I got to sign something. That's all I know. I don't care about it. I do care about the greater issue, though, and that falls into the realm of security. Security, privacy, and you have so many centralized storage hacks and breaches and all that stuff. How do we convince companies to, hey, Web2 storage, that doesn't work.

[00:26:12] You know, try it. It doesn't work. You know, move over into our Web3 storage approach, and how do you get them there? Yeah, that's a fantastic question. I do think back to my days of Web2, so centralized storage systems, which I both built and sold. And I was actually sharing this story on the weekend. If you haven't worked it out yet, I love telling stories.

[00:26:41] I was telling the story on the weekend that one of the most incredible problems to solve is the concept of synchronous replication. So in the event of any kind of failure in a data center, you can seamlessly roll over to a second data center and continue operations without any downtime, without any loss of data. It's very difficult and very costly to do.

[00:27:08] And depending on the type of data, you most probably will run the risk of replicating the corruption, because there's not enough time delay. And then there's this advent of asynchronous replication, which puts the security or protection guards in place, which is: okay, we're going to replicate to another data center per se, but there's a time delay.

[00:27:30] Five minutes, 10 minutes, 15 minutes, enough time for you to go hit the big red button and stop, so that you have a working database which doesn't have the corruption. Not so easy to do in a decentralized world. But bear in mind, in a decentralized world, you're not necessarily replicating the data between all of the validators. What you're doing is verifying the data between all of the validators.

[00:27:58] And so, in this manner, you're talking about proof of stake with validators, where usually it's a two-thirds-plus-one consensus mechanism, where more than two-thirds of the validators, or the nodes, need to agree that the result was correct before it's committed as a transaction.

[00:28:23] That's the guardrail that's actually in place. Unless there are nefarious actors among these validators, which is why we have slashing in place and whatever else based on their stake, you have a level of protection against writing or replicating corruption.
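The two-thirds-plus-one rule mentioned above is simple to state precisely. A minimal sketch of the threshold arithmetic (generic proof-of-stake math, not any specific chain's implementation):

```python
def reached_consensus(agreeing_validators: int, total_validators: int) -> bool:
    """Commit a result only when more than two-thirds of all validators agree,
    i.e. at least floor(2n/3) + 1 of them."""
    threshold = (2 * total_validators) // 3 + 1
    return agreeing_validators >= threshold

assert reached_consensus(7, 9)       # 7 of 9 clears the threshold of 7
assert not reached_consensus(6, 9)   # 6 of 9 falls one vote short
```

Slashing adds the economic layer on top: a validator that signs off on a corrupted result loses part of its stake, which is the guardrail referred to above.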

[00:28:45] But, as I said in the very beginning, when you look at censorship resistance and when you look at immutable blockchains, you do run the risk of people storing data that either should never be stored or recorded on the blockchain. You know, highly sensitive kind of child data or nefarious behavior of people that should never, ever firstly see the light of day nor be stored on a blockchain for perpetuity.

[00:29:12] And that's very hard to protect against because the only way that you can actually do that is to bring in a governing entity that can say, well, that block or that data that's stored there needs to be removed. It can be done. Thankfully, it doesn't often happen.

[00:29:30] When you talk, however, about the hacks or the data breaches and whatever else, we as a Web3 and blockchain community are unfortunate, because any time there is a hack, it gets highlighted every single time. Usually this happens at the bridge layer, at some kind of communication layer between chains or between networks as such.

[00:29:59] And the reason why it's highlighted so much is usually because of people that are against blockchain and crypto. And they don't want to talk about the fact that, hey, guess how much money laundering and guess how much manipulation of funds and data and whatever else happens in the Web2 world? We just don't hear about it.

[00:30:19] The fact that Web3 is open and transparent and you can see everything and anything that lives on the blockchain from a transaction perspective or at least from a fingerprint perspective is both good and bad. And yourself working in the database space, I'm sure you understand very well about open source and closed source kind of technology or software.

[00:30:44] And so it is very hard to protect against. We do, and we plan on doing, the greatest job that we can in what we're building, which is protecting the data that lives on the chain itself. This is why we've chosen to build it as an AVS secured through EigenLayer's restaking protocol.

[00:31:10] We've extended the security of Ethereum mainnet, arguably the most secure or one of the most secure blockchains out there, all the way through to our platform. And we extend that into Data Haven. And one piece, this is, I guess, adjacent to the question you asked, and I didn't mention this either: Data Haven is built in a very similar manner to Moonbeam, except it'll be secured through restaking.

[00:31:39] And when I say very similar, there's actually an execution layer, and it supports smart contracts. People could actually build whatever they want on Data Haven. We just so happen to be building this with the storage piece in mind, and in particular focusing on the AI side. But in reality, you could actually bring some of your other workloads or dApps, or even just your smart contract components, and deploy them on Data Haven.

[00:32:07] That's not what we're focused on, but it is actually doable. And the whole idea behind this is to be complementary to Moonbeam. We want to extend the offering. We want to extend the capabilities. We want our ecosystem to grow and feel like, you know, we're thinking well beyond the box. And I mean, this is why we've stood up as a completely separate chain.

[00:32:32] And the last piece, I digressed again, but the last piece behind this is we're building a native bridge between the two. So there will be seamless communication between Moonbeam and Data Haven and vice versa. Yeah. All secured by EigenLayer's restaking. You got it. Which is very powerful, by the way. It is. It is. Absolutely.

[00:32:59] I mean, I've got to tell you, I love the team over there. The brains of the people working there. Sreeram, Nader, a whole bunch of others who are just phenomenal. And if you keep a close enough finger on the pulse of what they're doing, you know, you could do it just through crypto Twitter, or X, whatever you want to call it, you can actually see a lot of the thought leadership.

[00:33:23] And it's why we chose to actually do it through EigenLayer as our restaking protocol. Awesome. Awesome. Yeah. It sounds great to me. I'm looking forward to watching your progress over time and seeing how things go. Yeah, thank you. If you ever need a database consultant, let me know. I may be able to chime in with my two cents. So, you know. I am all about making great connections and sharing where possible. So, I would love to pick your brain sometime.

[00:33:51] I mean, worst case, we go down memory lane of some old school technologies, which I literally did the other night with somebody who had built the same storage systems I did, which were NetApp and EMC way back when. You know, talking 15, 20 years ago. So, it's always fun. Yeah. So, with that said, I want to thank you very much for your time today. I loved this conversation. I enjoyed speaking with you. And I have one last question. I like both. How can people find out more information about you? About Moonbeam, about Data Haven?

[00:34:21] You know, how can they start to use, you know, your protocol, your platform? How can they do that? Yeah. Awesome. Firstly, thank you. It's been an incredible conversation. Love the chat. Love the questions. I said it to you beforehand: thinking on the fly is usually the best way to get good information out of people. If you want to find us, it's moonbeam.network.

[00:34:46] You can find us at datahaven.xyz, or at DataHaven on X. If you want to find myself, it's at Ryan J. Levy, pretty much across every single form of social or whatnot. And we are excited to share, because I know by the time this airs it will have been announced, but our light paper is actually being released today.

[00:35:14] It will go live on Tuesday to the greater world, but we've actually released that on our website today. Super exciting time for us. We've been busting our chops as a great team to actually pull this together. And we are looking for additional launch partners. So we have a number of AI projects and data and data storage projects that are already signed up as launch partners.

[00:35:42] So there is a link on the website if you want to sign up as a launch partner too. Awesome. Thank you very much for your time today. Of course. Thank you, Jamil. Appreciate it.

Digital transformation broadcast network
