Episode Transcript
[00:00:00] Speaker A: Hey everyone, and welcome to the 255th episode of Objectively Speaking. I'm JAG, CEO of the Atlas Society. I'm super excited to have Guillaume "Gill" Verdon join us today. He is the founder of Extropic, a startup tech company working on a new kind of computer chip to make artificial intelligence processing faster and more energy efficient. He's also known as @BasedBeffJezos on X. Gill is also the founder of the effective accelerationism, or e/acc, movement, which promotes rapid, unregulated technological progress, particularly in AI, to maximize human potential and economic growth. And I'm also very thrilled to announce that he is the keynote speaker at next week's Galt's Gulch in Austin. Gill, thanks for joining us.
[00:00:52] Speaker B: Yeah, thanks for having me. It's great to be here.
[00:00:55] Speaker A: So you grew up in Canada, and historically, ex-Canadians are overrepresented among the ranks of Objectivists. Do you feel that growing up in a place that was infused with a kind of egalitarian ethic and an embrace of a bigger role for government influenced your later views on regulation and bureaucracy, or did those views evolve over time?
[00:01:23] Speaker B: Certainly. I would say, growing up, I grew up in Montreal, Quebec. It has, I guess, one of the highest densities of language laws in the world.
For example, you're not supposed to learn English too early on. They want to keep you around paying taxes for life. And so they have all these extra laws to regulate literally how you speak or how you write, what you can put on your menu, on your packaging, and so on. And that's just the tip of the iceberg in terms of how you feel the weight of all this government overreach and bureaucracy hanging over your head growing up. And that eventually creates a sort of back reaction, and I guess.
Atlas shrugging. Yeah, that's right, that's right.
And so, in my case, that was really feeling my whole life like I belonged more in the United States ideologically, and yearning to go south. And here I am, having at least tried to escape that sort of trap in Canada. I would also say it's not just top-down and forced; eventually, once people become demoralized and there's no freedom to capture the upside, a sort of tall poppy syndrome seeps in if you try to be too ambitious.
People are like, aren't you happy? Why don't you just have a median job? Why are you so ambitious? Don't rock the boat.
And that sort of mentality was definitely not a fit for me.
And so it's sort of self-selecting. The Canadians that are okay with that stay there, and the ones that are more ambitious and want maximal freedom, and to be able to capture the upside of the value they create for the world, tend to move to the United States, which is the flagship when it comes to freedom, and we should keep it that way.
[00:03:50] Speaker A: Yeah, agreed. You know, actually Peter Copses, who is one of our trustees here at the Atlas Society, was one of the co-founders of the Apollo Fund. He and his wife were very ambitious; they wanted not to live mediocre lives, and after reading Atlas Shrugged, that is when they decided to also shrug and come to America.
So let's talk a little bit about quantum computing. How did you get into it and even move beyond it before most of us even knew it was a thing?
[00:04:25] Speaker B: Right, yeah, it's been quite the journey.
So originally I was trying to understand the universe. Growing up in Quebec, there are echoes of a sort of Catholic authoritarianism in general: they tell you how you're supposed to speak and what you're supposed to think.
And for me, there was a big back reaction. I didn't trust authorities for answers, and I had to rethink everything from first principles. I wanted to learn the first principles and reconstruct how the universe works for myself, so that I could trust my first-principles inference. So again, due to where I grew up and my back reaction to authoritarian thinking, I wanted to understand how the universe works. So I became a theoretical physicist. I was working on black holes, quantum information, and quantum cosmology. How did the universe begin? How may it end?
What are sort of the limits of physics?
What are the limits of space, time?
And naturally I started trying to understand how nature computes and viewing the universe as a computer.
And to me this is the most promising path to understanding all of physics through a single lens. It's the "it from bit" school of thought, now called "it from qubit."
And there was naturally a bridge to trying to reverse engineer nature and work with computers that compute in a way that is physics-based. A quantum computer is a computer that leverages quantum mechanics to do certain operations, and it essentially allows us to understand pieces of nature that operate quantum mechanically. They're not deterministic; they're not merely in a probability distribution over states, they're in a superposition. You could think of it as parallel universes.
And so, more precisely, I was kind of a specialist in quantum AI, or quantum machine learning, arguably one of the pioneers of the field. I wrote some of the first algorithms, which later got me noticed by Google.
And there the idea was to take inspiration from how black holes compress information; they are the most efficient compressors in the universe. Essentially: could we take inspiration from them for algorithms we could run on a quantum mechanical computer, to get an optimal compression algorithm? And another name for AI is machine learning.
You can phrase most of machine learning as learning compressions, compressed representations of the world. If you downloaded a ChatGPT- or Grok-type model, maybe it would fill up your hard drive, but essentially you'd have an approximate backup of the whole Internet.
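[Editor's note: to put rough, purely illustrative numbers on that "approximate backup" claim, assume a 70-billion-parameter model stored at 16-bit precision against on the order of 10 PB of crawled text; both figures are assumptions, not from the conversation:]

```latex
% Illustrative only: assumed model size vs. assumed corpus size.
70 \times 10^{9}\ \text{parameters} \times 2\ \text{bytes} \approx 1.4\times 10^{11}\ \text{bytes} \approx 140\ \text{GB}
\qquad
\frac{140\ \text{GB}}{\sim 10\ \text{PB}} = \frac{1.4\times 10^{11}}{10^{16}} \approx 10^{-5}
```

On those assumed figures, the model is a roughly 100,000-to-1 lossy compression of its training data, which is the sense in which it is an "approximate backup."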
And so that led me down the path of pioneering AI on quantum computers, eventually getting approached by Google, and going there to build a product now known as TensorFlow Quantum, and later on working for Sergey Brin on all sorts of special projects, including quantum sensors and the quantum Internet.
Over time, I realized that quantum technology is a technology similar to nuclear fusion: there's a sort of break-even point, we call it fault tolerance, where what you get out is more than what you put in, in terms of computation. And I see a path to that, but that path was on far too long a timescale for my impatient self. I ended up jumping ship after detecting an opportunity in something akin to nuclear fission versus nuclear fusion: something we can do right now, that arguably is more scalable immediately, and that gets us an energy efficiency and density gain similar to going from TNT to the nuclear bomb. And so that is what I set out to do approximately three years ago, and now we are having the first results and scaling our approach to what we call thermodynamic computing, with my company, Extropic. I'm sure we'll go into more detail.
[00:09:33] Speaker A: We're going to get into that in a minute, but I don't want to leave Google yet, because you arrived at Google in 2019. That was a couple of years after James Damore became one of the first very high-profile victims of cancel culture, after he was fired for writing the fateful Google memo, officially titled "Google's Ideological Echo Chamber." Are you familiar with that episode, and how did it square with your experience at Google?
[00:10:07] Speaker B: Yeah, I mean, Google is a great organization, and I don't think it's ideologically homogeneous. But certainly, having an engineer try to point out something in statistics and voice an opinion, and then getting completely canceled, was a warning shot for anybody else who would try to voice an opinion that wasn't the median or the mode of the population within the walls. I would say that created a sort of shadow network of engineers who felt that some things don't make sense, that some things prescribed top-down don't make sense. And it's not just Google; it's across big tech players. There was an ideological capture that we saw across the board, I think, in the mid-2010s. To me, it's just always suboptimal whenever one culture captures everything, is self-reinforcing, and there's no ability to discuss ideas. I personally was more interested in the discussion, internally and externally, of whether big tech and the top AI institutions in the world should work with the government and defense sector to put the right technology in the right hands so that we have national security. There was, again, inhomogeneity within the organization there. There was Project Maven 1.0, where Google tried to work on AI for defense, and the sort of camp that cancels people tried to cancel Google itself, I guess, and walked out.
And to me that was very unfortunate, because Google was the premier organization in the world for AI research. I would say the mid-to-late 2010s were the golden era for Google; everybody who mattered was there for research, and that's what brought me there. And I had a great time research-wise. They invented the transformer, which led to the prosperity we see today, among many other things.
But the fact that there was an ideological echo chamber, that it converged on the view that we can't discuss various social things or various national security interests, that we weren't mature enough to have an open discussion, and that there was no free market of ideas internally, I thought that was suboptimal for the growth of the company.
And those are just the things on the outside. On the inside, in general, this happens a lot: opinions crystallize around the allowed opinions, and it can happen with technical opinions as well.
I would say big orgs tend to suppress ideas at scale, because if they went with the ideas of every smart engineer they hire, and they have a hundred thousand plus, they'd go in all sorts of directions. But sometimes they overdo it, and sometimes they even miss big opportunities, like AI for coding.
Google invented the transformer. They could have captured the market. They didn't. And that was an artifact of a bubble of opinion. So it's not just a social downside; there's actual shareholder-value impact to having echo chambers. To me, it was a lesson that having a free market of ideas, being very open, and encouraging a variance of ideas to flourish and be considered is paramount to the functioning of any society or large organization. Google itself is a giant org; it's a sort of microcosm, a bubble in itself.
But for it to function well, it needs a free marketplace internally.
And so certainly that episode, and my experiences more on the defense side, opened my eyes to the importance of culture. That's what led me to start voicing opinions and ideas, prototyping them anonymously, and eventually starting this movement for a free marketplace of ideas, freedom of speech, freedom of thought, and, we'll get into the AI part as well. That was the e/acc movement, which I started pretty much after leaving Alphabet.
[00:14:52] Speaker A: So you and I met last fall at the XPRIZE Visioneering Summit. Peter Diamandis, of course, has been a guest on this podcast and an honoree at our previous gala, as someone who has read Atlas Shrugged five times and considers it his bible. Now, at the summit, you were pitching a prize to solve the challenge of making computing more efficient in order to meet the coming energy demands of AI.
[00:15:26] Speaker B: So.
[00:15:26] Speaker A: So tell us a little bit about your vision for that and how it ended up.
[00:15:33] Speaker B: Yeah. I mean, originally, again, even for technological progress, getting caught in a local mode of thinking prescribed by authorities or establishments is a bad idea. We've built all of this technology on silicon operating deterministically for many decades now, and so there's a lot of inertia, and skepticism that any massive disruption could surprise us and be just around the corner.
And so for me, it's been an exercise in having an extremely contrarian thesis, again derived from first principles of physics, then taking a lot of heat for having such a contrarian thesis, and now starting to be proven correct.
And it's been very validating. The benefits to society are going to be massive as long as we keep scaling. In our case, we looked at what a computation essentially is. At the end of the day, it's a thermodynamic process.
You have a distribution over inputs and a distribution over outputs, and you can phrase many, many of the algorithms that run the Internet this way, including AI algorithms, because the process is probabilistic rather than a deterministic input mapped to a deterministic output.
You can run computations probabilistically. And if you look at the physics of electrons and of matter, when you go small enough, things are sort of jiggly and non-deterministic; you don't know where each particle is, right? Usually that's a problem for a deterministic computer, so we filter out that noise, and that costs us a lot of energy.
But instead, we decided to start using that noise to run the algorithms more naturally. And the energy efficiency gain you get, and even the spatial density gain you get, is over 10,000x. That's pretty massive.
Again, that's on the order of going from TNT to the nuclear bomb.
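[Editor's note: as a toy illustration of "using the noise" rather than filtering it out, here is a minimal sketch of probabilistic bits sampling a Boltzmann distribution via Gibbs sampling. This is a generic textbook construction in plain Python, not Extropic's hardware, software, or actual algorithm; all names and parameters are invented for the example:]

```python
import numpy as np

# Toy "probabilistic bit" (p-bit) network: each bit flips stochastically
# under thermal-style noise, so the network *samples* a Boltzmann
# distribution instead of computing one deterministic answer.
# Generic Gibbs sampling on a tiny Ising model -- illustration only.

rng = np.random.default_rng(0)

n = 4
J = np.array([[0, 1, 0, -1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [-1, 0, 1, 0]], dtype=float)  # symmetric couplings
h = np.zeros(n)                              # per-bit biases
beta = 1.0                                   # inverse temperature

s = rng.choice([-1, 1], size=n)              # random initial state

samples = []
for step in range(5000):
    i = rng.integers(n)
    # Local field felt by bit i from its neighbors.
    field = J[i] @ s + h[i]
    # Probability bit i relaxes to +1: the noise does the work.
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    s[i] = 1 if rng.random() < p_up else -1
    if step > 1000:                          # discard burn-in
        samples.append(s.copy())

# The empirical distribution over states approximates exp(-beta * E(s)).
states, counts = np.unique(np.array(samples), axis=0, return_counts=True)
for state, c in zip(states, counts):
    print(state, c / len(samples))
```

In hardware, the idea is that physical thermal fluctuations play the role of the `rng` calls above, so the sampling comes essentially for free instead of costing energy to suppress.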
So I like to say we're a sort of Manhattan Project for AI. But I didn't think a big lab or a government lab would move fast enough to execute on this technology, so I decided to do it in the private sector.
And I guess I can show our progress so far. It's been a few chips now. Oops, maybe. There you go.
We have a chip that is in silicon, operates at room temperature, and is a computer that's between 1,000 and 10,000x more energy efficient for these probabilistic algorithms that underlie so much of what we do.
And so to us, this is the most important thing we could be working on. Because it's one thing to produce more energy, which is what I talk about with the e/acc movement, climbing the Kardashev scale; we'll get into that.
But it's another thing to make how you use the energy, how you turn energy into intelligence or into value, more efficient.
And that's what we're solving. And if we don't solve...
[00:19:13] Speaker A: This problem. Right. I remember there was a bit of a debate; you know, it's a competition. Somebody says we need to solve ovarian cancer, somebody says we need to save the whales.
And you were saying that basically, unless we get this done right, we're going to be limited in how we're able to solve America's and the world's other great challenges to humanity.
[00:19:40] Speaker B: Really, it's that we're solving the problem that solves other problems, right? If you have really potent artificial intelligence, and you have a lot of intelligence per watt, then you can just apply that intelligence to solve your other problems. So from the point of maximal leverage, this is the problem we should focus on in order to solve the other ones. That was my thesis, and I think I was very correct when pitching at the XPRIZE that the market pull for artificial intelligence is so strong that we were going to start building massive nuclear-reactor-based build-outs for these AI clouds. And that's been the case; you just keep hearing news that more clouds with tons of reactors are getting built. To me, it's good that we're creating more energy, but at the same time, it's not the most energy-efficient way to do it. And it's not just a question of the environment; it's also return on invested capital. If you're spending a trillion dollars on build-outs, you want them to last, and if five years from now there's a technology that makes your current technology somewhat irrelevant, then that was a risky investment, right? And so, both from a philanthropic standpoint, making sure that intelligence is abundant and cheap and everyone has access, but also, and I'll get into it, in order to avoid an over-centralization of AI, AI being controlled by very few parties who can then prescribe what we think and what we say, I think a densification of intelligence, in terms of energy efficiency and spatial density, is necessary in order to maintain individuality in the era of AI, the era where we augment our own cognition with external AI agents. We need personalized AIs that we own and control personally, that are an extension of ourselves, rather than one centralized cloud that runs all the AI for the world, where whoever is in the background can slightly tilt the distribution of our thinking in much more subtle ways that are almost untraceable. To me, that leads to top-down authoritarian thinking. And from a pure power-seeking standpoint, whoever is in control of such a system will convince people to converge onto a collectivist mindset, and then, while everyone's fighting for scraps, those in control would have a lot of power and wealth, and they'd suppress the free market of ideas, which is really the error correction mechanism against tyranny. All that would be gone. And I think this is something that also resonates with Elon; it's why he started OpenAI as open, and now he's started xAI.
We can get into that.
But I really believe that we're solving the hardest part of the decentralization of AI, which is the densification of intelligence at the hardware layer.
[00:23:19] Speaker A: Yeah. So Elon Musk interacts with your Based Beff Jezos account regularly. I don't know if yesterday was a record, but it was pretty...
And then Marc Andreessen, my neighbor here in Malibu, is another account that regularly interacts with you. He's also a big fan of Atlas Shrugged.
He described a chilling meeting that he and his colleagues had with the Biden administration's AI advisors, in which the latter shared a vision of having only a few big AI players, in coordination with government, controlling the future of the industry, one in which competing AI startups would be severely curtailed. He left that meeting and immediately decided to endorse Donald Trump.
Do you share Andreessen's perspective on the dangers of this overly intrusive, centralized government approach when it comes to a flourishing AI environment?
[00:24:29] Speaker B: Yeah, absolutely.
You know, I would say that the previous administration's stance toward AI was my personal key issue that drove me to lean toward the current administration.
And yeah, that is one of the key issues I've been fighting against with e/acc. Essentially, we were seeing a convergence toward centralization of AI, with some corporations wanting to sort of merge with government. And that was very risky to me, because again, if the only people allowed to have AI were the government and a few corporations incorporated into it, then that leads to tyranny. It just does. It's too great an opportunity for power-seeking folks. The solution is to keep AI power decentralized, and for the power of the individual, how much AI an individual can purchase and control, to be sufficient, so that there's a sort of deterrence, similar to the Second Amendment. There's no absolute monopoly on violence; there is a backup. If people own their own weapons, they can defend themselves against tyranny. But violence in our age has been largely intellectualized or virtualized; it's more about how you can control people. If you have the ability to predict people's behavior, you can steer them; you can engineer control signals, things you tell them, to steer them toward a certain outcome. If individuals didn't have AI to help augment their cognition and their individual skepticism, and you just had the augmented capabilities of centralized agencies and governments to subvert people, if those capabilities were jacked up to eleven and the individual capability to resist cognitively were not, that would lead to really bad outcomes. I think Marc Andreessen was aware of this. I would say Elon was aware of it as well, and he instantly created a competitor to make sure that no one player runs away with the whole pie.
And it seems like, hopefully, that chapter of some AI companies trying to be crowned kings too early in the game is over, and now it's just a free market, which honestly has been great so far. Essentially, our thesis won.
It's not been only a few companies allowed to do AI; many companies are allowed to, again partly because of our efforts.
Elon and Marc are kind of the ones who spearheaded things in government. I've been more on the grassroots side.
But essentially, we've had free market competition, and you have all sorts of AIs with different cultural biases.
And the technology has just been accelerating and getting cheaper for everyone to access.
And everybody benefits, right? Instead of only a few using the technology and centralizing it to consolidate their power.
[00:28:10] Speaker A: So yeah, let's get back to e/acc and effective accelerationism.
To what extent did you see this as a reaction to the effective altruism movement that Sam Bankman-Fried and others had started promoting?
[00:28:29] Speaker B: Yeah. So effective altruism is a sort of funny movement. They're essentially hedonic utilitarians: they try to maximize hedonism, how good people feel or not. Well, not just people; sometimes shrimp as well, for some reason.
But essentially, they have this weird moral framework that yields really suboptimal optima.
If you try to maximize hedonism, you can converge onto wireheading, right? You're just in a VR world or a simulation, in this near-nirvana forever, and you're not a productive member of society.
But overall, I think EA was starting to find ways to capture capital, in any way, shape, or form, from the free market. So they're trying to deform the free market and then reallocate the capital to these sorts of nonprofits; it could be for mosquito nets, or shrimp farms, or weird stuff. But eventually they concentrated, 95% plus, don't quote me on that figure, but very much most of their portfolio, on AI safety and regulatory capture for AI, which is really just trying to put themselves in power. Their whole thing was essentially: AI is dangerous, we need responsible adults in the room to control it, we're going to be the responsible adults, put us in charge. And they would fund all these think-tank-like organizations that would then become arms of the big labs and continue this fear-mongering, spreading what is called AI doomerism, which I would say is a spin-out of effective altruism. So we just saw this whole, really well-funded complex that was converging us onto a very bad outcome, and we had to start the resistance to that movement. That's originally how e/acc started. It was antithetical to EA in some ways, but really that was just the first battle. This sort of enemy creeps up under a different name all the time. But essentially, e/acc was there to fight for freedom: freedom of speech, individuality, individual agency, celebrating the individual, and making sure there's no massively oppressive government, or weird complex, that restricts your ability to be productive and over-centralizes power.
[00:31:30] Speaker A: Yeah. So how did you...
It's not just you; it's a crew of people and allies, some of whom have pseudonyms. I don't know if others have been outed as you were. So you might want to talk about how that felt at the time, and how you decided to lean into it.
[00:31:52] Speaker B: Yeah. So, I mean, the movement gained quite a bit of influence in Silicon Valley, and it was starting, in the shadows, to get some influence in Washington.
For example, we were opposing the previous executive order on AI, which was going to really kill open-source AI.
But essentially, for some reason or another, I got doxxed by reporters. Really, I think their goal was to take me down,
because I was becoming a problem: I was getting people to rally around this cause, and that was an impediment to the over-concentration of power.
I got doxxed, and then came the traditional media pile-on. They would fabricate things and try to associate me with all sorts of movements I'm not associated with, or deform the message, or go for clickbait: "this man is building Skynet," or something ridiculous.
That was a lot.
That was a big change in my life: going from being some scientist in a room advising all sorts of important people, but usually as the asset in the background, to being front and center, having my face plastered all over the timeline and getting 100 million views a month. That was a big change. But it's been a year and a half since the doxxing, and I guess I've gotten used to it. Essentially, I wanted to turn this attack against them. If you have an antifragile mindset, you can turn any adversity into some upside. So I used it as an opportunity to go on podcasts and spread the message further than it could ever have gone, and also leveraged it to acquire talent for my company and raise awareness about this important challenge we're pursuing with Extropic. That helped us hire some of the best, and the fact that we've achieved these results on such a short timescale since then is a testament that getting more attention can be useful for getting the best talent and moving the ball forward for civilization, technologically.
[00:34:37] Speaker A: So here at the Atlas Society we are huge fans of Andreessen's Techno-Optimist Manifesto. Curious whether you had any role in that, or was it maybe just indirect, or...
What can you tell us?
[00:34:53] Speaker B: Yeah, I mean, Marc was a big fan of e/acc from the beginning, or I guess from pretty early on.
And essentially, we were corresponding very actively for quite a few months, kind of fusing our views on the world.
And that was the time during which he was writing the manifesto, and I am one of the first influences cited there. So I was more in the background, but I would say it's very much a version of the manifesto that's maybe less cosmic-scale, as I tend to think as a physicist, and more practical and immediate, with more policy prescriptions.
And I fully endorse it; I consider myself a techno-optimist. What I saw, in terms of how ideological capture happens, is that you have a meme or an idea that spreads, and then it keeps mutating and comes up under different names; that's what we're fighting. So the intent with e/acc was always to spread a central meme, a complex of ideas, have it mutate into several forks, and then have those forks carry the influence. It's much harder to take down something with several heads than just one. They did try to take me down and take down the central branch, but by that time it had already forked. It's a way of compartmentalizing memetic brand risk.
And so techno-optimism is an example of something very akin to e/acc that's maybe more professional, less from some weird corner of the Internet, and can be marketed in Washington. And now the vice president has literally said he's a techno-optimist.
And so, in terms of influence, it's been really great. And I would say that this administration, at least from what they say, has been very supportive of our requests for policies that maintain American competitiveness in AI and maintain openness.
[00:37:24] Speaker A: So, speaking of the cosmic scale: tell us, explain in layman's terms, what is the Kardashev scale, and how can we climb it?
[00:37:34] Speaker B: Yeah. So the Kardashev scale is a set of milestones on a log scale, so each one is exponentially harder.
They're milestones that track how big our civilization is, how much energy we are producing and/or consuming.
There are three big markers on that scale, and you can interpolate between them, a sort of linear interpolation on the log scale. Kardashev type 1 is essentially: we produce and leverage as much energy as is impinging upon the Earth from the sun.
That is a certain number of watts, and it's pretty massive.
I think we're at barely 1% of Kardashev type 1, not on the log scale, just in terms of raw power. Kardashev type 2 would be: we leverage all the energy, or the equivalent of all the energy, being radiated by the sun. Again, it's not energy but wattage, so power.
And then type 3 would be the entire galaxy. Essentially, in my studies of thermodynamics, and more precisely stochastic thermodynamics, which is the physics underlying life itself, I saw that there is a sort of Darwinian natural selection over all of matter, I call it thermodynamic selection, where the fitness function is whether or not the system has dissipated more heat, which is really weird.
But essentially, it also tells us that the odds that you fluctuate back to zero, that the system completely dies, get much smaller as the system gets bigger. And that makes sense. So to me it was: oh, this is fundamental to life. If we get bigger as a civilization, as measured by thermodynamics, by our consumption of energy, that ensures a lower likelihood of the destruction of this phase of matter, the extinction of civilization. So I felt we had a responsibility to climb up the Kardashev scale, and to me that's the key issue, the one metric we should strive to improve for our civilization. Because unfortunately, GDP and capital are hard to track, and imperfect; money sometimes gets inflated away or printed. It's not an objective.
[00:40:33] Speaker A: Metric.
[00:40:33] Speaker B: Whereas energy: a joule is a joule wherever you go in the universe, and so is a watt. So to me, that was just a better thing to optimize than hedons, or hedonism, which is completely subjective and leads to weird optima. And now, I guess, this has been kind of Elon's whole thing, but we kind of merged memetic complexes, and climbing the Kardashev scale is now the key issue for me.
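[Editor's note: for reference, the scale is usually interpolated with Carl Sagan's logarithmic formula. Definitions of the type-1 threshold vary (total sunlight hitting Earth, as described above, versus Sagan's round 10^16 W), and the present-day wattage below is a rough, commonly cited figure, not from the conversation:]

```latex
% Sagan's interpolation: P is the civilization's power use in watts.
K = \frac{\log_{10} P - 6}{10}
% so K1 \leftrightarrow 10^{16}\,\mathrm{W},\quad
%    K2 \leftrightarrow 10^{26}\,\mathrm{W},\quad
%    K3 \leftrightarrow 10^{36}\,\mathrm{W}.
% With a commonly cited present-day figure P \approx 2\times 10^{13}\,\mathrm{W}:
K_{\text{today}} \approx \frac{13.3 - 6}{10} \approx 0.73
```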
What will accelerate our ascent is creating a way to convert energy into value as efficiently as possible, so you get more value per unit of energy. That's going to increase the demand for energy, and thus create a sort of positive pressure to scale up civilization. So for me, creating this technology, the most energy-efficient way to convert energy into intelligence, a sort of steam engine for intelligence operating at the limits of thermodynamic efficiency, was the way to create that pressure to climb, along with creating the social movement to raise awareness that this is the key issue we should all be aligned toward.
And it's naturally something that free markets optimize for, right? Free markets select for organizations that utilize their resources in a way that maximizes growth. To me, it's literally a fundamental algorithm that leads to self-assembly of complex systems with emergent properties that are optimal. Our bodies are kind of a free market of cells: they all have some coupling with one another, some chemical interchange, a sort of chemical and energetic economy. But the emergent property is this functioning organism that is you.
And I view capitalism itself as an AI-like, or physics-like, algorithm that is far more efficient at capital allocation and growth than any top-down prescription or top-down control. Imagine a human trying to design every cell in your body. We wouldn't be able to do it; we wouldn't be able to design ourselves. So it's a lesson in humility. I don't think any one committee can design a whole complex system, but a complex system can design itself through self-assembly, and it does so by constantly competing, having the freedom to explore, and optimizing for growth.
[00:43:35] Speaker A: I'm going to get a rebellion on my hands if I don't get to some of our audience questions. So let's try to take a few.
My Modern Gal, always great to see you, is asking: Gill, what are your thoughts on the risk of disinformation and misinformation online today? Do you think AI opens up new risks that we haven't yet accounted for, like AI-generated audio or video?
[00:44:00] Speaker B: Yeah, I would say AI-generated audio and video is already here.
I would say that, again, if you have a sort of symmetry between the side generating and the side discerning, in terms of capabilities, if you had your own AI assistant that you trust, that you own and control, that tells you whether something is real or fake and can detect it, and that augments your own cognition instead of putting more cognitive load on you, then that's fine. You just want symmetry in terms of capabilities.
I would say trying to suppress these capabilities is not the way forward. There's a lot of upside left on the table, and really, everything is an acceleration; it's a race in terms of capabilities. As long as it's not just the government that has access to these tools, able to generate propaganda so good you can't even discern whether it's real, because you don't have access to AI you control, that would be really bad; that would be the main risk to me. But it's fine if, peer to peer, we have the ability to generate and discern, just like humans do.
A smart human can tell you something completely false that updates your world model, and you either have the cognitive security yourself, or you augment your cognition with a group of people you can talk to: is this real? Is this correct?
There are peer-to-peer ways to validate information. If everybody has access to more intelligence, then we'll be able to collaboratively filter things, either between us and our AIs, or through subgroups of AIs for people you trust and whose values you feel aligned with. I think that's the future; that's how it's been since the dawn of civilization. There's been an increase in capabilities for generation, but there's also an increase in capabilities for the discernment of truth.
[00:46:20] Speaker A: Okay. Anne M. asks: when do you estimate launching or shipping the first commercial version of your product?
[00:46:27] Speaker B: Yeah. So this chip that I just showed, we're packaging into a development kit for enterprises, innovative startups, and maybe a handful of individuals. It is a small batch; it is just our test chip, and we're aiming to put it in the hands of the first customers by the end of summer, which is very exciting in terms of timelines. To go from concept to a prototype delivered to customers' desks in three years is pretty great.
Next year is when our million-probabilistic-bit chip launches, and it should be widely accessible.
And that chip is a proper product, not just an experimental development kit for those trying to get ahead.
Depending on the org, already starting to experiment with thermodynamic computing is sort of essential, because the disruption is coming next year, and they need to get ready in terms of how this affects their algorithmic stack. If you're in finance or defense, obviously it's mission-critical for you to have the most cutting-edge capabilities. And if you're in AI in general, obviously there's a free-market competition there that's heating up, so any advantage you can get, you should take.
And so, yeah, if anybody wants to use the dev kit early, we have a sign-up form. We're going to put out some of the software in the open first, and the hardware is limited, because we just didn't do a very large run of these first chips; it's between 200 and 1,000 early customers. But you can apply on the website, extropic.ai, where there's a sign-up form for those interested.
[00:48:19] Speaker A: Great, yeah. So I have another kind of comment here, from King Fisher. He says he's bullish on AI, but he thinks people are overhyping what AI is currently capable of. Too many think it's the be-all and end-all. How would you respond to that?
[00:48:40] Speaker B: Yeah, I would say the current AI capabilities are not the end game. And I think calling human-like, or human-level, AI "AGI" is very short-sighted; I compare it to a sort of geocentrism, but in the space of intelligence. I think human intelligence is a mile marker in our ability to understand the complexity of the world, predict it, and steer it.
I worked on AI for physics, which is much harder than emulating a human: understanding biology and matter. I was working on generating quantum matter, superconductors, and esoteric materials using AI.
I think it's going to keep going.
I think current systems are not human-level. They can emulate what a human would respond, but they don't have agency yet; they don't have the curiosity to seek out new information, to decide whether to explore or exploit.
Right now we just have raw intelligence, raw compression, the ability to predict the next token in a sequence, but not that sort of agency. So right now, whoever leverages AI and becomes the source of agency for a fleet of AIs can create products that generate a lot of revenue and really impact the world in a positive way, and I encourage people to do that.
And everybody has the ability to do that. Even if you literally don't know how to code, you can just ask the AI to code for you nowadays. So really, human agency plus artificial intelligence right now is a sort of golden period. Eventually, will we figure out agency for AIs? Yeah, probably. I think we're going to need way more compute than we have right now, and that's what I'm trying to bring forth. But really, my goal with this form of computing is not an anthropocentric goal. I'm not obsessed with trying to automate humans; that's not my goal. My goal is to understand the physical world in order to increase our ability to expand to the stars.
I'm really targeting: can we use AI to understand our biology and control it, to simulate it, and eventually to help us with problems of materials science and all sorts of scientific breakthroughs that I think would be beyond the reach of any human that's ever lived, in terms of cognitive difficulty.
And so I think the symbiosis between AI and humanity is going to be really important. But those that are closed-minded about leveraging AI are going to be left behind, and those that are open-minded and leverage it are going to do very well.
That's why I feel a responsibility to spread this message: you should try to embrace AI, because there's a lot of upside in it for you and your descendants.
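[Editor's note: as a minimal illustration of the "raw compression, predicting the next token" point above, here is a generic temperature-sampling sketch. The toy "model" is an invented lookup table standing in for a trained network; nothing here is any particular product's API:]

```python
import numpy as np

# Minimal sketch of next-token prediction: a language model reduces to
# "given the context, produce a score (logit) per vocabulary token,
# then sample one."  The model below is a stand-in lookup table,
# purely illustrative; real systems replace it with a trained network.

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context: list) -> np.ndarray:
    """Stand-in for a trained model: favor a canned continuation."""
    preferred = {"the": "cat", "cat": "sat", "sat": "on",
                 "on": "the", "mat": "."}
    logits = np.zeros(len(vocab))
    nxt = preferred.get(context[-1])
    if nxt is not None:
        logits[vocab.index(nxt)] = 4.0   # strongly favored token
    return logits

def sample_next(context: list, temperature: float = 1.0) -> str:
    logits = fake_logits(context) / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs))

context = ["the"]
for _ in range(5):
    context.append(sample_next(context))
print(" ".join(context))   # prints a plausible five-token continuation
```

The "agency" the speaker says is missing is everything outside this loop: deciding what to ask, when to stop, and what to do with the answer.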
[00:52:05] Speaker A: We are super excited to have you as our keynote next week, actually a week from tomorrow, at Galt's Gulch in Austin. I know you've been busy changing the world and, as we're hearing right now, getting your product ready to ship by the end of the summer.
Have you had a chance to read any of Ayn Rand's literature? Because a lot of what you're describing seems pretty in line with some of her ideas.
[00:52:32] Speaker B: Yeah, I really should finish what I started there. I think I started some of the audiobooks but haven't quite had time to finish. But there's also something very validating about it: I came at it from first principles, along my own path, and of course I've been exposed, directly or indirectly through the culture, to Ayn Rand's ideas; maybe that seeded some of the ideas that then became e/acc. It's hard to trace.
Again, it goes back to what I was talking about with memetic complexes.
But in my case, the fact that I converged onto similar principles from my own journey is really validating for Objectivism and that set of principles. For me, it came from a journey of trying to understand the physics of the world, the physics of complex systems, the physics of capitalism, the physics of society at large. And to me, this seems, frankly, optimal. I guess there are two schools of thought, let's say, in academia. One is that you do the literature search first, and then you feel like everything's been done, and maybe you get dissuaded from exploring a set of ideas. The other is that you go forth and build the idea out, and then see, after the fact, whether there's existing literature that strongly overlaps. There's something about creative spurts: if you feel like it's an original idea, you're going to be more excited. I would say there's probably a lot of overlap between Rand and what I've been a proponent of.
But yeah, I definitely need to connect the dots, probably looking backwards.
But I will, I will work on...
[00:54:40] Speaker A: Reading. Definitely, or even just Anthem, because that was all about this.
She had a very unique vision. She actually published Anthem, her dystopian post-apocalyptic novella, twelve years before George Orwell's 1984.
While a lot of the dystopian writers of the time saw this totalitarian future as technologically advanced, in Rand's telling it actually became more medieval, more primitive, because they did not understand the value of individualism and freedom. And they started, as you experienced in Quebec, with control of the language: the word "I" has been abolished and lost, and that's to control people's thoughts.
Well, we're also going to be doing mentoring roundtables at the conference. I'm not sure how much time you'll have with us, but maybe you could just tell us now what kind of advice you would have for young people who want to live a life of achievement, productivity, and meaning, with all of the changes that are coming at us as a society at warp speed.
[00:55:59] Speaker B: Yeah, I would say: you can learn from people that have done things, but no one is a central authority for everything. You should pick and choose advice from several parties, but ideally you should derive your own worldview from scratch, derive your own set of values. Obviously, you can take inspiration from Objectivism and from what we've been saying, but you should converge onto a set of values that you convince yourself are your set of values, from scratch, ideally, because that's very robust to other people trying to influence you. Whereas I think those that converge onto collectivism tend to defer their cognition to the group.
And what they don't realize is that they're giving up a lot of power, control, agency, and also cognitive security by doing so. If you're just believing what is prescribed to you to believe, you will likely not have a great life; or if you do, you don't know the life you're missing out on.
And I would say: don't take no for an answer.
We have this saying on Twitter, "you can just do things," and it's really true.
Some people will tell you you can't do it, but you can be like: really? Why not, though? And then you can keep going. For me, it was: I was in Quebec, and I said, hey, I want to be a theoretical physicist at the best schools in the US. And after that, I wanted to be a quantum computer scientist, and the physicists were telling me, what do you mean? What are you doing? And then, when I was a quantum computer scientist, I said, I want to start a new paradigm of computing, and they were like, you're crazy, you have a great thing going on. So whenever you have really high agency, people will tell you you're crazy, or that you're taking too much risk. But that's usually the direction you want to go, because people who want to keep you in their priors, or keep you constrained, will usually point you down the gradient of lower risk, and you should take more risk. I think the highest risk is to take no risk. This is common advice, but I really try to live by it. Anyways.
Yeah, thanks so much for...
[00:58:36] Speaker A: Yeah, absolutely. It dovetails very much with the ethos here at the Atlas Society, with our open Objectivist approach, in which I remind people: no one can think for you, not even Ayn Rand. So thank you. I'll see you next week.
Very much looking forward to...
[00:58:55] Speaker B: Looking forward to being in Austin, being...
[00:58:57] Speaker A: Back in person again. And thanks, everyone, for joining us today. Be sure to join us next week, when we will be in Austin. I'm going to be interviewing author Jimmy Soni to talk about his book The Founders: The Story of PayPal and the Entrepreneurs Who Shaped Silicon Valley. We'll see you then.