How to Harness the Power of High-End GPUs for Scalable and Efficient AI Model Training, with Jakub Ondrášek @ Clore.AI
Crypto Hipster
433
00:33:45 · 17.44 MB


Jakub Ondrášek is the CEO of Clore.AI and boasts an extensive background in developing mining pools for Ethereum and other cryptocurrencies. Having first opened a pool in the Monero ecosystem at 12 years old, Jakub went on to operate a cryptocurrency mining farm in his native Czech Republic during his early teens. After observing a scam from a company posing as a GPU marketplace, Jakub leveraged his technical expertise to create the original prototype for Clore.AI within 2 months, launching in December 2022.

[00:00:00] Hello everybody and welcome to the Crypto Hipster Podcast. This is your host Jamil Hasan, the Crypto Hipster, where I interview founders, entrepreneurs, executives, thought leaders, actually really amazing people all over the world. And today I have another amazing guest and he is the CEO of Clore.AI. His name is Jakub Ondrášek. Jakub, welcome to the show.

[00:00:26] Yeah, thank you. You got my Czech name somewhat right. And yes, I founded Clore.AI. My background is as a programmer, and I've been around crypto for some time. I was most interested in the background side of mining crypto — I historically operated some mining pools — and now I put most of my time into operating Clore.AI.

[00:00:53] Awesome. Well, I'm excited and looking forward to learning all about it — how you guys create a mass harvest of dormant computing superpower, which is really cool to me. First, though, I'm going to ask you to get a little bit more into your background. I see you've been in crypto for a little while. What is your background, and is it a logical path to what you're doing now with AI?

[00:01:20] Yeah, more about my background: I basically got into crypto in 2017. I never tried to harvest yield by playing with DeFi, and I never tried to trade crypto on sentiment.

[00:01:36] I basically always looked at the fundamentals of projects. I'd reason from the code: if I make this trade, then by the code itself, this will need to happen. So I always made my trades and positions in crypto based only on the code itself. I never really cared about technical analysis or anything like that.

[00:02:01] And founding Clore.AI is the most logical thing to me, because there are a lot of unutilized GPU resources, and miners are not always getting the best yield for their machines. What was missing was a fully open GPU computing marketplace where we basically don't put up any barriers and make the trade as seamless as possible — allowing everything possible with GPU compute, the most open marketplace in the world.

[00:02:41] All right. So let's start with that. What is Clore.AI all about, first of all?

[00:02:48] Yeah. Clore is the place where you can offer your GPU machines and someone else can run them. And it doesn't stop there. If you're a miner, for example, you can offer several overclocking profiles for your machine, and clients can pick among those profiles. So depending on what you're paying for electricity at different times, you can maximize the profit of your mining machine at basically any time.

[00:03:21] That's the hosting-provider side. On the client side, you might be an AI researcher or a developer of a new cryptocurrency. We've seen this with two recently launched large coins — basically the two biggest right now, Qubic and Aleo — whose testnets were run on Clore.

[00:03:54] As for use cases: blockchain developers are heavily using Clore right now because it lets them scale to an extent that's hardly possible in standardized environments. For example, if you wanted to rent a large number of GPUs from a large centralized provider, you'd need to sign a contract saying you'll rent them for two years or something. With Clore, this doesn't happen. The market is open: providers set the prices they want, clients accept or decline, it just flows all the time, and it reaches the best prices and best rates in the world.

[00:04:42] Okay, so let's — you mentioned miners, right? And that they're not making the yield they want to make, but with you they can. So what are some of the real, current pain points that miners are facing? Before, it was China throwing everybody out, and then grid costs in Texas went up. Where we stand right now, what are some of their greatest pain points?

[00:05:15] Well, to some extent — we're focusing on GPU miners here, machines with GPUs, not ASIC miners. So the China situation didn't hit GPU mining that much. The worst thing that happened to GPU mining was the Ethereum merge, a few years ago. Now most miners are just sustaining: they're not buying much new hardware, they're trying to pay off their farms, they're moving to lower-energy-cost countries, and otherwise they basically want to get the most money out of their miners.

[00:06:00] Recently the market hasn't been moving that much, but one way to make greater money — to have some edge against the market — is going after speculative coins, basically new coins, and being first on the market. You may not want to do that yourself, because you don't know which will win. But clients on Clore may rent your machine to speculate on something and pay you a higher yield sooner. You're not risking anything; you're just receiving the yield.

[00:06:47] And if you stay as a mining machine, there's also an incentive to upgrade. Most miners won't go and upgrade their machines — the CPU, RAM, and storage — to be more useful for AI. But if you do, you open up a whole new world of much higher returns on your machine.

[00:07:14] So there are basically two machine classes. One is the miner class, used for testing new proof-of-work crypto projects and running low-level simulations. The second class is higher-end machines — ones people already had, or miner machines that were upgraded — which are heavily used for large language models and Stable Diffusion-style models for generating images at scale. And I'm talking too much, right?

[00:07:50] No, you're good. I'm beginning to understand. So basically these miners would improve their yield. You have dormant computing superpower out there, and you enable them to tap into that dormant power, right? How do they do it, and why should they do it?

[00:08:09] Well, first of all, there is no disadvantage to joining Clore at all. I don't want to name names, but some other projects that are trying to build a GPU marketplace, or are somewhat a GPU marketplace, have security vulnerabilities, and they don't necessarily allow you to offer exactly what you want. Some of them will just force you to list your machine at its highest power limit, when you may not have adequate cooling at your facility because you originally intended it for mining.

[00:08:49] At Clore, that's okay. Our marketplace is split into the mainline marketplace and what we call the power-efficient marketplace. If your machine cannot reach the defaults, that's fine: it's delisted from the mainline marketplace and appears on the power-efficient marketplace. Most AI-grade users will ignore it, and your machine will mostly serve speculative cryptocurrency miners, cryptocurrency developers, and small AI models.

[00:09:25] And why should you join? Right now most miners use Hive OS, right? We have what we call Clore Fleet: you can generate a config for your miner, put it into Hive OS, and your machine will automatically appear on Clore.AI.

[00:09:47] There's no disadvantage in it. You can set a minimum price for what you want to receive for your machine, set overclocks, set power limits, set everything. And if Clore doesn't find a new client to run it, that's okay — your machine just keeps mining as it did before, no harm done. So as a provider you can only yield more money; you cannot lose.
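
The "you cannot lose" logic Jakub describes — set a minimum price, and if no client meets it the machine simply keeps mining — can be sketched as a simple rule. This is an illustrative sketch only; the function name, prices, and highest-bid matching rule are assumptions for clarity, not Clore's actual API or matching algorithm.

```python
# Illustrative sketch of the fallback Jakub describes: a machine is
# rented only when a client's bid meets the provider's minimum price;
# otherwise it keeps mining, so revenue never drops below the mining
# baseline. All names and numbers are assumptions for illustration.

def allocate(min_price: float, bids: list[float], mining_yield: float) -> tuple[str, float]:
    """Return (mode, hourly revenue) for one listed machine."""
    qualifying = [b for b in bids if b >= min_price]
    if qualifying:
        # Award the machine to the highest qualifying bid.
        return ("rented", max(qualifying))
    # No qualifying bid: keep mining as before -- no harm done.
    return ("mining", mining_yield)

print(allocate(0.20, [0.15, 0.25], 0.12))  # ('rented', 0.25)
print(allocate(0.20, [0.15, 0.18], 0.12))  # ('mining', 0.12)
```

Under this rule the mining yield acts as a revenue floor, which is exactly why listing carries no downside for the provider.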

[00:10:14] Got it. And so what would your clients' biggest competition be right now — would it be meme coins? What would it be?

[00:10:26] What do you mean by clients right now?

[00:10:29] The people who use Clore.AI, right? Your customers.

[00:10:34] My customers? I don't get how meme coins can be a competition to a GPU marketplace.

[00:10:43] Okay, good. Just checking — because they seem to be a competition to everybody right now.

[00:10:52] Yeah. To think about the actual competition: on one side you have the standard providers, which have their own facilities with their own GPUs. They generally come with higher prices, and if you want to rent something larger you need to sign contracts or something — so it's not that feasible.

[00:11:18] And then there are the other projects, in the crypto space and even outside of it. I couldn't really say we're competing on the same level — we're all a little different. And I haven't seen any project that came as close as Clore to just allowing providers to list everything. The demand — the market — tells you itself what the market wants. So if there is demand for machines with specific settings, you can list them on Clore. Clore is open to everything.

[00:12:02] Got it. Okay. So I want to find out, then, why it's really crucial to democratize GPU access and reduce reliance on centralized cloud-based systems.

[00:12:19] Well, the first thing about centralized cloud-based systems: it doesn't happen for now, but in the future, as AI becomes more and more popular — and depending on what country you're living in — some countries may start creating censorship for AI models. They might try to restrict you from renting GPU computing power to run an AI model that isn't aligned with the political agenda in some countries.

[00:12:49] It also makes sense on another level. If you're a centralized provider, you're trying to find clients for yourself, and sometimes you don't have everything rented. But you wouldn't advertise lower prices: if you've rented something for more to one client, you don't publicly rent it for less, because your other clients would start demanding less too. At Clore, we just don't have such issues, because we're not the provider — the platform, the chain itself, is basically just a place to interconnect these people.

[00:13:28] So we can even see centralized providers coming to Clore more and more to list their unused equipment on the greatest marketplace. Theoretically — it's just philosophical for now — it has to lead in this direction, with centralized providers themselves listing more and more of their equipment on Clore.

[00:13:54] Got it. So one of the things—

[00:13:57] Why is it important? Basically: no censorship. You can trust that your AI will always work, that decentralized clouds will always work.

[00:14:09] That said, I cannot say everything is just sunshine and rainbows for now. The one problematic thing in decentralized computing is that some workloads still cannot — or only with great difficulty can — be encrypted, fully homomorphically encrypted. So for some workloads I still have to say that having your own machines, or renting from a centralized provider, just makes more sense for now, until the encryption gets better for those workloads — or you may simply not want to risk it. But that's fine; that's the market.

[00:14:51] Year after year we see everything moving. We see zk-SNARKs, we see encrypted models — we see everything moving in this direction, which will favor the decentralized, democratized GPU clouds.

[00:15:12] I remember back in — I think it was June, July, and August of this year — when I was doing about eight podcasts a week. I would talk about the same topic with different founders in successive weeks, and the answers would be completely different; what they told me was completely different. But one of the things that I think makes sense is this idea you guys do of on-demand rentals, right? You mentioned the rentals and leveling the playing field. How do your rentals level the playing field and really help create and enhance competition?

[00:15:58] Well, how it helps competition is easy in itself. If demand is going down somewhat, the providers themselves are always incentivized to lower their prices, because providers always want to harvest the most they can relative to their electricity costs — their revenue minus electricity costs.

[00:16:30] So providers are always incentivized to find the best terms, to go to places with the cheapest electricity. And for AI workloads it's actually some balance between electricity costs and the internet connection. You cannot set up a very successful Clore host in, I don't know, Kyrgyzstan, where you basically have no high-speed internet connection in the whole country.

[00:17:02] But yeah, it's a playing field: Clore allows basically everyone to be on one marketplace, everyone competes in one space, and it just drives efficiency on all levels.

[00:17:15] Drives efficiency. Okay, makes sense. You said oracles too — so when I think of oracles, I think of blockchain oracles.

[00:17:25] You probably misheard. I didn't say oracles.

[00:17:29] Okay, I misheard. I'm sorry.

[00:17:32] So, great. One of the things we think about when we think about this tech evolution — because you've been here since 2017 and so have I, which means you've probably read a whole bunch of white papers that made no sense, and so did I, right? — is that we have to make this tech evolution more sustainable. So how do we make it more sustainable by tapping into unused compute power, and what's the ideal level of compute power to achieve sustainability and keep it there?

[00:18:06] What do you mean — economic sustainability, probably, right?

[00:18:09] Yeah. Yeah.

[00:18:11] Well, right now — okay, just on computing power: the general computing power industry is super old. We know it's profitable; it makes sense. The GPU computing power industry — yes, the providers make money right now. We basically know that if you're in the provider space, you're generating profits.

[00:18:36] The problematic part is the clients. The clients are sometimes — a lot of the time — VC-funded companies that are basically losing money right now. So it's hard to predict what will happen. But if I just mention the two largest players, or the players I'm most interested in — Anthropic and OpenAI — we see them basically scaling down the parameters of their models, while the demand is still rising at somewhat insane rates. And yeah, I personally think it will start to be profitable, because with the demand for AI rising at this speed, I don't see it slowing down.

[00:19:38] Regarding sustainability: the clients are always a step ahead — not against the providers, but because they also want to get the most out of the computing power. So they are always trying to make things more efficient and faster. If you look, for example, at the LLM frameworks, over the past two years they were able to achieve something like 2x improvements. When you look at that as a provider you might say: okay, this isn't great, because there's effectively 2x less demand per workload.

[00:20:25] But the demand for AI models is rising at such a rapid pace that it basically doesn't matter. It makes sense that demand is now rising even for the VC-funded companies, let's say, and I just believe they will be able to hit profitability without large issues.

[00:20:49] I think it's interesting you mentioned VCs. I went to Consensus in Austin in May, and there were panels of VCs giving their thoughts on AI. None of them agreed, and several of them didn't know what they were looking at — I'm not going to name names. So how do we create a more understanding field for VCs, or how do we educate them on what they should look at — or is someone else better equipped to do that?

[00:21:30] You know, that's kind of a problematic thing. Of course, if a VC's executives don't understand what they are investing in, they probably shouldn't do it, right? As for how we educate them — okay, the risk in the AI industry is lowering.

[00:21:55] From my standpoint, for VCs it's probably best to try to get in if some of the larger companies open up fundraisers. Right now it's better to bet on the larger companies than on smaller startups. If some smaller startup claims they are training their own foundation model from zero, I would personally not invest in them, because that approach is basically already losing its sense: open-source foundation models are hitting really great outcomes, and the competition in pure foundation models is really high.

[00:22:44] If you want to achieve something, it's just better to say: we'll use this foundation model, we'll train it on this data that we have, we have some advantage as a company, we can do it. I would mostly avoid people claiming they will train a foundation model — I would not avoid them if they have some revolutionary technology, but otherwise I would avoid that class of companies.

[00:23:09] That makes me think about what the best model is. Some will say the best model is federated learning with a much smaller data set. What are your thoughts on what's more optimal and more ideal — federated machine learning with a small data set, or this massive, all-encompassing style?

[00:23:31] We know, basically — I think we've seen it with Mixtral, and I would believe the same of OpenAI and Anthropic, though they are basically closed, so we don't know exactly what they are doing — that the lead in generating the best outputs for users is some mixture of smaller expert models for specific fields. We've seen with Mixtral that it makes great sense to have smaller expert models and one router between those models, plus RAG databases.

[00:24:08] This has nothing to do with Clore, but I personally ran an experiment for one company where we trained a model — it was a competition, basically — and we needed to get as close as possible to Wikipedia answers at some point. So we basically built the fastest search over Wikipedia we could, and we achieved the lowest loss by having a second RAG database, which was queried by the model, and the model pulled the data from it itself.

[00:24:47] So I think it's some mixture of expert models, plus models being able to navigate databases and pull the data themselves, because training is lossy — we know it. When you're fitting many petabytes of data into some model, yes, it's best to fit as much data into the model as possible without overtraining it. But at some point, models that are able to navigate databases and effectively pull out context from them make the most sense for now. I'm not sure what the future will bring, but I see it this way.
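
The architecture Jakub describes — a router in front of smaller expert models, with the model pulling context from a retrieval database — can be sketched minimally. This is an illustrative sketch under stated assumptions: the keyword router and toy "experts" below stand in for the learned gating networks and language models real systems use, and the word-overlap lookup stands in for a production RAG stack with vector search.

```python
# Minimal sketch of router + experts + retrieval, as discussed above.
# The keyword router and word-overlap retrieval are illustrative
# assumptions; real systems use learned gating and vector search.

DOCS = {
    "ethereum": "Ethereum moved to proof of stake in the 2022 merge.",
    "gpu": "GPUs accelerate the linear algebra behind neural networks.",
}

def retrieve(query: str) -> str:
    """Pull the stored document sharing the most words with the query."""
    words = set(query.lower().split())
    return max(DOCS.values(), key=lambda d: len(words & set(d.lower().split())))

def crypto_expert(query: str, context: str) -> str:
    return f"[crypto expert] context: {context}"

def ai_expert(query: str, context: str) -> str:
    return f"[ai expert] context: {context}"

# Route keywords to experts; a real router scores token representations.
ROUTES = {"crypto": crypto_expert, "mining": crypto_expert,
          "model": ai_expert, "training": ai_expert}

def answer(query: str) -> str:
    """Route the query to an expert, handing it retrieved context."""
    context = retrieve(query)
    for keyword, expert in ROUTES.items():
        if keyword in query.lower():
            return expert(query, context)
    return ai_expert(query, context)  # default expert

print(answer("what changed in ethereum mining"))
```

The point of the sketch is the division of labor: the router keeps each expert small, and retrieval supplies exact facts so the model does not have to memorize them — which is why training's lossiness matters less.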

[00:25:32] That makes a lot of sense. So you talked about supply and demand, right? I look at the history of AI — which is a hot topic right now and seems to recycle every 20 years, right? You said the demand shows no sign of slowing down, but it did slow down in the past; it ebbs and flows over time. The AI narrative of the 1960s, 1980s, and 2000s never stuck, right? Why is this current cycle the one that will stick, and what would it take to break the demand — or increase the supply — so you don't have wild action happening in the economic models?

[00:26:24] Yes. Well, previously we just didn't have these good AI models because we were limited by the computing power itself. We have known linear algebra, as humanity, for a long time, but right now is the first time in history we have achieved this quality of models, simply because we have the computing power. Once we have the computing power, it starts making sense that this might be possible to build, because in the year 2000 it was nearly impossible. It would have cost an insane amount of money, and you couldn't see any probability of building any relatively cheap-to-operate model for anything. So it didn't make sense. Then in 2016 or so came probably the first baby steps of creating large language models, which were probably the hottest topic from the start. Then you had transformers, and you go on building on this architecture, making it more efficient, and you're getting more and more data, and it's getting better and better. At some point it finally got cheap enough that it could be pulled off, and then it got more and more efficient, and when you have models at such scale, the demand for them basically went up, because you finally have some product that is in demand.

[00:28:06] I think the demand now may come not just from everyday people directly prompting GPT or something, but from some large backend systems that are verifying something, checking some data, processing some data. Even in trading, for example, if you want to analyze the market, to look at a lot of things, you just put it up. I'm somewhat addicted to Python, I want to over-analyze everything, but it makes the most sense; it just saves time. And right now, on the demand side, I think it just makes economic sense: where we have demand for something that replaces human workers for cheaper, it's a no-brainer, and more businesses are basically going to adopt it, and it's getting better and better.

[00:29:20] On the supply side, I personally don't think there will be some squeeze, some insane thing that will make models like 1,000 times or even 100 times cheaper. They are relatively cheap now, but I don't see any insane shock to the market coming from the supply side that would somewhat kill the profitability of AI model developers in the future. Even our technology, the GPUs: we're hitting some limits of the silicon itself, where we'll not do 100x scaling. Moore's law basically doesn't apply anymore, in my opinion.

[00:30:06] Got it. You said a magic word there that's been consistent with everyone I've talked to over time. You said Python, a very old language, yeah. What are the chances that Python can be replaced and improved upon? I haven't heard of it yet, so what would be able to make it better?

[00:30:40] Well, I don't think it's necessarily a constraint; it depends on what you need to build. Of course, if you're building some highly efficient application, you'll go and build it in C, basically. Recently we've seen people talking about Rust replacing Python, but Python is basically so simplistic, and I think that's somewhat the thing driving it in this AI revolution. We have things like PyTorch and TensorFlow, and you basically import these libraries into Python; they are extremely efficient, using C backends, and then the final adjustments to this you make in Python. In my opinion it's still the most programmer-friendly, even in these AI ecosystems. Of course it can be replaced, but in my opinion it will still be hanging around the number one spot; not necessarily number one, but it will be hanging in the top end. I personally don't find anything wrong with Python being one of the top languages, basically.
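The split Jakub describes, a friendly Python surface over fast compiled backends, is what PyTorch and TensorFlow do with their C/C++ kernels. The same pattern can be illustrated self-containedly with only the standard library, since CPython's own builtins such as sum() are implemented in C while the orchestration stays in Python:

```python
# Python as glue: the interpreted loop and the C-implemented builtin
# compute the same result, but the compiled backend is much faster.
# PyTorch/TensorFlow apply this idea to tensor kernels instead of sum().
import timeit

values = list(range(100_000))

def python_loop_sum(xs):
    """All-Python control flow: every addition goes through the interpreter."""
    total = 0
    for x in xs:
        total += x
    return total

# Same answer either way; only the execution path differs.
assert python_loop_sum(values) == sum(values)

loop_time = timeit.timeit(lambda: python_loop_sum(values), number=20)
builtin_time = timeit.timeit(lambda: sum(values), number=20)
print(f"interpreted loop: {loop_time:.3f}s, C-backed sum(): {builtin_time:.3f}s")
```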

[00:31:53] Got it, okay, good to know. All right, so, awesome. I want to thank you very much for speaking with me today; I enjoyed learning about your company. So thank you. I have one last question, actually two last questions. How can people find more information about you, or how can they start to use your product or service? How can they do that?

[00:32:24] Well, okay. About me, I personally don't think I'm necessarily an important person; people should just look at the product itself. The product mostly represents what we do, basically, and it's one of the first things I've done publicly that isn't just me trading some stuff or making some analysis on something, right? And yeah, about Clore, you can visit the Clore website and docs.clore.ai, which is also linked on the website. People can read these two pages and learn a lot about Clore. But you know, if you're a crypto miner, a GPU crypto miner, you can yield from Clore; you can put your machines on Clore. And regarding AI developers, people needing to run AI: the Clore.AI marketplace, in my opinion, is so simple to use that if you're running some AI, you'll know what to do when you see it.

[00:33:35] Got it. Awesome, thank you very much for your time today.

[00:33:38] Yeah, I thank you also. Nice interview, basically.

Digital transformation broadcast network
