The Strange Attractor

Dissecting Media Bias and AI's Role in Research and Innovation with Ben Field and Justin Beaconsfield from Raava | #5

Co-Labs Australia Season 1 Episode 5

Ready to delve into the world of AI startups, media bias, and the future of research and education? This episode promises an enlightening conversation with Ben and Justin, a.k.a. 'the AI boys', the brains (and potential boy band) behind Raava, an emerging AI startup and impact member at Co-Labs Australia. We explore their journey into this fascinating field and the decision that led them to leave their academic pursuits to focus on this promising new frontier at the absolute cutting edge of applied AI. We also explore AI's potential use as a tool for good, the ethical implications, and the unique opportunities available for pioneers in the AI space.

Ever considered how media bias and algorithms can limit our exposure to diverse perspectives? The conversation takes an interesting turn when we discuss this overlooked aspect of our digital lives. We dissect how custom news feeds can lead to polarisation, and the challenges inherent in a system driven by advertising revenue. From Twitter's community notes to other tools designed to combat bias, we dig deep into the initial conditions of these systems and the urgency of working towards unity rather than division and explore how Raava are working on a media bias plugin to help folks make sense of the information they consume.

Lastly, we dive into the potential impact of technology on the world, the importance of wide boundary thinking in its creation, and touch briefly on the concept of complex adaptive systems. We traverse a range of topics, from using technology to solve challenges in different bioregions, to the potential of an 'AI tech stack platform' that could assess the reliability of scientific data. The discussion is jam-packed with insights that will leave you with plenty to ponder. So, join us as we explore the future together, guided by the insights of our inquisitive guests, Ben and Justin. Let's expand our horizons, shall we?

Keep up to date with Raava:
- Website
- LinkedIn

Still Curious? Check out what we're up to:

Or sign up for our newsletter to keep in the loop.

This experimental and emergent podcast will continue adapting and evolving in response to our ever-changing environment and the community we support. If there are any topics you'd like us to cover, folks you'd like us to bring onto the show, or live events you feel would benefit the ecosystem, drop us a line at hello@colabs.com.au.

We're working on and supporting a range of community-led, impact-oriented initiatives spanning conservation, bioremediation, synthetic biology, biomaterials, and systems innovation.

If you have an idea that has the potential to support the thriving of people and the planet, get in contact! We'd love to help you bring your bio-led idea to life.

Otherwise, join our online community of innovators and change-makers via this link.




Samuel Wines:

Hello and welcome to another edition of The Strange Attractor. This time we sat down with Ben and Justin from Raava, so Ben Field and Justin Beaconsfield. They were our entrepreneurs in residence and have since set up an AI startup, and they operate from our space. They are a very fun, dynamic duo, informally known as 'the AI boys' at the lab space, and, yeah, we just thought we'd sit down and have a chat with them about some of the projects they're working on, because obviously the AI space is really fascinating right now, with all the potential good and potential harm that could come from it.

Samuel Wines:

It's interesting to have two people at the forefront of that who are thinking deeply about these questions, about how we can ensure that we have technology that helps humanity get back in alignment with the natural patterns and principles of the biosphere that we inhabit. Yeah, good little conversation. Great that Andrew could join us halfway through. And without further ado, I hope you enjoy this conversation with Ben and Justin. Ben, Justin, welcome, and thank you for joining us for another episode of The Strange Attractor.

Ben Field:

Thanks for having us. Thanks.

Samuel Wines:

Yeah, no, it's a pleasure to have you here. I feel like there's such good banter that we have all the time that I wish we could somehow have had recorded. I mean, we could probably ask Apple, I'm sure they record everything, but it would have been great to have that in a conversation somewhere. But I reckon we're going to be able to recreate it in this dialogue. So do you want to give us a little bit of an intro, like who you are and what you guys do?

Justin Beaconsfield:

Yeah, I'm Justin. What about me? I just finished up, well, I didn't quite finish doing my masters in data science. You didn't finish? No, I had one semester to go and then... I know, but I left to start the business with Ben.

Samuel Wines:

I was going to.

Justin Beaconsfield:

Also, I was actually originally going to leave to become a quantitative trader, and then Ben and I were getting super into AI. I was already interested in AI a lot through uni, more the actual building of AI, but then seeing the emergence of the tools, and realizing there was this whole field of applying those AI tools, I got really interested in that. I chatted with Ben a heap about it, was traveling for the first seven months of the year, and then was going to come back home and start a job as a quant trader. And then I decided to blow that up to start this business with Ben instead, which I'll speak about in a second. Yeah, for sure.

Justin Beaconsfield:

Yeah, I'll hand over to Ben, maybe. Yeah, that's me.

Ben Field:

Yeah, I mean, I was studying biomed engineering and also kind of bowed out of that. Not bowed out, but I stopped that to do this with Justin. I've still got like a year and a half left, so hopefully I don't ever have to get that year and a half done, actually. Similar to Justin, honestly, I think we were just having a lot of really interesting conversations, and I think we were both separately thinking... I mean, I was, and still am, really interested in biology and synthetic biology, and that's how I know you, Sam, obviously. But I think Justin and I both had this realization that there's an inflection point going on here, and this is the point in time where you can build up a lot of leverage just by being early.

Ben Field:

AI is an old field, but applied AI, AI engineering, is a very quickly evolving field that there doesn't really exist much precedent for. The knowledge is evolving; there isn't really a syllabus for it yet, and so there's a lot of value in just digging in and learning as much as you can.

Ben Field:

I think that's something Justin and I really enjoy doing, just researching and getting into rabbit holes, and I'd kind of got the building bug from some software we'd built prior to that. And, yeah, having someone else also keen to just dig in, get up to speed in the field and really get to the frontier was a big motivating factor to drop what I was doing and what I was interested in, to do something that felt more pressing from a career perspective. Something that felt both extremely interesting and also potentially highly valuable as a form of leverage in my ability to do stuff myself and do cool shit. And on the back of that...

Justin Beaconsfield:

I feel like a cool analogy for where we see the application of AI is: you had the invention of computers, and that developed over a while, and computers started getting really good. And then from there the field of computer science emerged, which all of a sudden had all these people studying not how to build computers, and not necessarily understanding how computers worked under the hood. I mean, they usually want to understand it to some extent, but the field was about how you can create good algorithms or systems and leverage the technology.

Justin Beaconsfield:

And we kind of see AI heading to a similar point, where it's like, okay, now you don't necessarily need to understand... you want to understand a bit of the computer science and math and statistics that underlies the AI itself, but what if your specialty was applying the AI? And that was the thinking for us. Because this AI-as-a-tool that can be applied is really only emerging now, if we get to work right now, while the field has just started, we're instantly going to be right up there with experts, because you can't be a 10-year expert in a field that, you could argue, has existed for a bit more than 12 months.

Ben Field:

I mean, there are people who've had head starts. But being good at this kind of new AI engineering thing seems like, mostly, a mix of being good at software engineering, being good at traditional machine learning, and then being quite creative and just making stuff.

Ben Field:

Yeah, and we've found that we've learned the other skills fairly quickly, and we are getting there, and it's just fun. I think I work well, I learn very quickly, when I'm enjoying it, and I don't learn quickly when it feels arbitrary.

Samuel Wines:

So if I was going to frame that as kind of a natural evolution of tech, right, it's that you're standing on the shoulders of giants. Like, great, these people have opened this entirely new field. This is the beginning of infinity.

Samuel Wines:

Yeah, how can we build and develop things and take that next step? How can we apply what was maybe primary research, or just figuring out how things work from a serious-play perspective? And then you're sort of going, you know, now that we've had that divergence, let's converge on how we can apply this to have meaningful impact in the world.

Ben Field:

Yeah, basically up until now, I think if you were applying AI in business, you were either a business that had an enormous amount of data and an enormous amount of technical expertise in-house, or you were researching stuff. We've now gotten to the point where things that would have taken years of research, serious expertise and a group of PhDs can kind of be done via an API call, and that opens up an enormous amount of surface area for applications. And learning how to use these tools effectively, not even as a programmer but just as an individual with a ChatGPT subscription, is going to become a metaskill, I think, where it accelerates your ability to execute on whatever else you're interested in. So, yeah, we're very interested in helping other people learn how to apply this metaskill to execute on whatever else they want to.

Justin Beaconsfield:

A lot of what makes it really powerful is not just that it allows you to do things you couldn't previously. A lot of those things, as Ben kind of alluded to, you could achieve if you put a really experienced team of ML engineers on the task. But what's interesting is now, instead of putting in months of development time with a bunch of experts, people can very simply achieve tasks. And what that winds up doing is it goes, okay, this thing that previously wasn't worth pursuing, because maybe it wasn't going to add enough value for the resources you were going to put into it, now is worth pursuing. And if that's part of a bigger system and adds a heap of value, then it just unlocks all sorts of capabilities, because the required input of resources to get some level of output is way less than before. So what can we do now that it costs less in terms of time, expertise and money to achieve the same results?

Samuel Wines:

Yeah, so I know you've been working on a couple of projects whilst you're in the space, and we've bounced so many ideas off each other. But I thought maybe, speaking to how you've been going, okay, cool, if that's the case, how could we apply that? You know, maybe with the media bias sort of...

Justin Beaconsfield:

Thing that you've built. Yeah, that was the first project. I think a good example of where some of these large language models are really useful is pretty much the first project we started looking at, which we're now starting to ramp up again a little. It was essentially a tool that would take in articles, read the article, and then start assessing it for all sorts of bias. So it would look at things like how emotive the article is, whether it's left- or right-leaning, whether it's anti- or pro-establishment, which is something we kind of took from Max Tegmark at MIT with Improve the News, a really cool inspiration for what we were doing. But the idea was, yeah, okay, now any article I'm reading, I could hypothetically just have a large language model, which is a bit more objective, read it and alert me to things that I might miss. And, you know, media bias is obviously something that can be so subconscious in the way it targets us, and just having it drawn to your attention tears apart so many of those effects. Just using large language models to really simply parse all this text input can have really cool results.
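As a rough sketch of how a scorer like that might be wired up, here is a minimal Python version. The axis names, score ranges, and JSON reply format are all illustrative assumptions on our part, not Raava's actual implementation, and the model call itself is replaced by a canned reply:

```python
import json

# Bias axes pulled from the conversation: how emotive an article is, whether
# it leans left or right, and whether it is anti- or pro-establishment
# (that last axis inspired by the Improve the News project).
AXES = ("emotiveness", "political_lean", "establishment_lean")

def build_bias_prompt(article_text: str) -> str:
    """Assemble an instruction asking an LLM to score an article on each axis."""
    return (
        "Score the following article on three axes, each from -1.0 to 1.0, "
        f"and reply with JSON only, using the keys: {', '.join(AXES)}.\n"
        "emotiveness: 0 is neutral reporting, 1 is highly emotive.\n"
        "political_lean: -1 is strongly left-leaning, 1 is strongly right-leaning.\n"
        "establishment_lean: -1 is anti-establishment, 1 is pro-establishment.\n\n"
        f"Article:\n{article_text}"
    )

def parse_bias_scores(llm_reply: str) -> dict:
    """Validate the model's JSON reply and clamp each score into [-1, 1]."""
    raw = json.loads(llm_reply)
    return {axis: max(-1.0, min(1.0, float(raw[axis]))) for axis in AXES}

# A canned reply stands in for the real model call in this sketch.
canned = '{"emotiveness": 0.8, "political_lean": -0.2, "establishment_lean": 1.4}'
scores = parse_bias_scores(canned)
print(scores)  # establishment_lean is clamped from 1.4 down to 1.0
```

In a real pipeline the canned string would be replaced by an actual LLM API response; clamping guards against the model drifting outside the requested range.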

Ben Field:

I also think that idea was kind of spawned from a conversation where it was very obvious that these tools are going to be used to generate disinformation at quite a large scale. They already are. And something interesting that I learned recently was that bot farms, up until this point, have actually been human beings.

Ben Field:

So bot farms, at least in the context of Russian involvement in the US election and stuff like that, a lot of the time are actually farms of human beings sitting in a room, perhaps on a ship in international waters, or some room in some country with pretty lax focus on this sort of stuff. A new-age gulag.

Ben Field:

Yeah, it is. And each person will have a computer and maybe man 20 or 30 phones, and they'll just be in charge of, like, I have 100 Twitter accounts and I'm just going to go through and comment a bunch of shit based on my objective. But it's actually been human beings. There are bots in the sense of things that just algorithmically like stuff or follow people, but the bots that are actually making comments, for instance, if you're trying to inflame people on the left with kind of inflammatory right-wing comments on Facebook, those comments were being written by people with an objective. But now, with a large language model, something that you needed to pay people in the third world to do, you can do at a much greater scale, and probably a higher level of sophistication, and that's really scary. I think these tools are objectively going to be used for pretty horrific things at a societal scale. And so now we're having this conversation: it's very obvious that these tools will be used for that, so how can these tools also be used to combat it?

Samuel Wines:

Like, preemptively, how can we ensure that there is stuff out there to assist and help people? Because you see this all the time, right? One of the main things that happens with everyone now is that, because of the algorithms and the custom news feeds, you never get fed multiple streams in your diet of information.

Samuel Wines:

It's like you only eat corn chips, or you only eat ice cream. You don't get a diverse, nutritious range of foods coming into your diet. And I find this so fascinating: you either agree with someone or they hate you. There's no middle ground of, well, maybe some of what you just said is actually true and factual, and this part of it I actually disagree with. You can't have the nuance. It's got to be polarized, black or white.

Justin Beaconsfield:

I mean, part of that's the whole TikTok generation, really low attention span thing, because also how quickly the algorithm...

Ben Field:

Again, an example of AI being used for ill is how quickly, how good the algorithm is at recognizing what evokes an emotional response from you and then optimizing your feed based off that. Isn't there something where you can get to within 10 minutes...

Justin Beaconsfield:

You can get to content like that really quickly. And these are things that only take three seconds to read, or seven seconds to watch, or something like that. And when you have that, you need to put in all the information you can in that three seconds or seven seconds or whatever it is. So, inherently, what you do is you are reductive, and you reduce, and that strips away any chance you have at nuance. It destroys context.

Justin Beaconsfield:

Yeah, exactly. All you can do is pretty much take one side of any given thing and put one spin on it. And if you want it to be at all interesting, you can't give a neutral stance, because you have three seconds to explain what you're explaining. So you just do something inflammatory, and it works totally hand in hand with that, and it polarizes us.

Samuel Wines:

So we need this media bias plugin up and running.

Ben Field:

Yeah, and it was interesting thinking about the design of that. I mean, at the moment that's just a little demo that we have running on our computer that works reasonably well.

Samuel Wines:

I was pleasantly surprised yeah it works pretty well.

Ben Field:

But I think we're now going to work alongside you and some other people to figure out how we can take this from a demo into something real.

Ben Field:

But how you think about the design and the distribution of these things is as important as the capacity of the technology. I think the best example of combating media bias that's in the wild is honestly Twitter's Community Notes. If we're talking about context, the fact that a community note expands the context scope of whatever piece of media you're looking at on Twitter is really valuable, because I often find it serves to defuse the initial impact of whatever the skewed tweet was. But the reason that's powerful is because it's operating at the scale of Twitter. With our media bias thing, you have to convince people to download a Chrome extension and then use the Chrome extension, whereas you don't install Community Notes; if you're on Twitter, you get exposed to those notes. And I think combating bias is much more, in a sense, about distribution, about tweaking the algorithms of these platforms of enormous reach, than it is about figuring out tools. So it's an interesting challenge.

Samuel Wines:

It's a tough one, right, because when you have perverse incentives baked into the system, where it's entirely based off an advertising revenue model and eyeballs on screens, racing to the bottom of the brainstem, limbic hijacking, is the best way to evoke that feeling and get people enraged, angry, or into whatever heightened state of peak emotion, and then hit them with something like an ad that targets them when they're in a state of sensitivity. Yeah, it's so hard to compete with that unless you actively look at the structures of the systems and acknowledge that someone designed them. It's like the worst-kept secret: these have all been designed, to an extent. You just need to have a look at those initial conditions and be like, maybe don't do that. But I guess the issue is convincing the stakeholders to be able to do that.

Ben Field:

But maybe we just need to build tools for informational guerrilla warfare. Every time you step onto a social media platform, you're being exposed to an informational battlefield, whether you know it or not. And it is an interesting idea to think about: what does an informational freedom fighter look like? What do guerrilla warfare tactics look like in the online landscape?

Justin Beaconsfield:

I mean, there's so much to tackle as well. Community Notes is great on Twitter, but if you had to say whether Twitter does more to polarize us or to unite us, you'd say almost certainly to polarize us. And it's like, well, okay, Community Notes exists, why isn't that solving all the issues? Well, the thing it's solving is blatant misinformation. If something's a lie, it's going to point out that it's a lie. But the things that polarize us, predominantly, aren't lies.

Samuel Wines:

Sometimes they're truths. Yeah, exactly, they're not.

Justin Beaconsfield:

It's just a half-truth, or it's disinformation. But I think with the most insidious ones, and probably the most common ones, it's not even that it's disinformation. It's not even that it's a half-truth.

Justin Beaconsfield:

It's just that it's presented in a way that frames you, that positions you to see the issue in one way. Through the use of emotive language, and it's not even just emotive language; I wish it could be reduced to things as simple as emotive language. It's really clever framing. It's telling the facts in a certain order, in a certain way, that paints a picture. And it's a really complex thing. I wish there was an easy way to articulate exactly what's going on, but I think we all sense it.

Samuel Wines:

But this is the thing, again, going back to attractor points, right? We all know that that's what wins. That is something that, in an arms race, or just in this sort of context, is upvoted, so to speak. And if we know that's what works, then everyone ends up reverse-engineering what works from the accounts that do it best, and then suddenly everyone's doing that same thing, and it's just a race to the bottom. It's like a positive, reinforcing feedback loop of shitfuckery.

Justin Beaconsfield:

Yeah. The tricky thing is that, as much as it's hard for us to pick up on this nuance, the AI that's essentially running under the hood of all these social media platforms is able to pick it up really well and just feed it to us. It knows we're going to react best to things that sit in the confirmation-bias realm and just affirm what we already believe.

Justin Beaconsfield:

And it's really crazy. I even noticed, I made a new Twitter account, and part of that was because on my old Twitter account I noticed that really quickly I just started getting a bunch of what I found to be quite intense right-leaning posts when I started reusing the account, and I was like, ugh.

Samuel Wines:

My algorithm is wrong, I'm just making a new account. Was this Twitter or X? X? OK, just had to clarify. So it's post-Elon takeover, yeah.

Justin Beaconsfield:

Yeah, I mean, it wouldn't surprise me either way. I'm not sure it's necessarily just an Elon thing.

Justin Beaconsfield:

I also just think it was the realm I was looking at. I was looking at a lot of tech things, and I just think there was this pocket that I wound up in. Or, like, I looked at a few too many Elon Musk posts, because there is that pocket of fans, and I just saw a couple of posts I found interesting, and then in half a second it was like, oh well, my feed is just wrong.

Samuel Wines:

This is literally what we were talking about before, how we've been trying to think of how you could do this like a mind map, a neural network of how things connect to one another. Essentially, what you're saying is, rather than leaving that shit under the hood, where no one actually knows what's going on and how these things interconnect and relate to content, like we were chatting about, how could you actually use what's currently being used against us for bad, for impact-oriented innovation that can support a more resilient, regenerative future? Did you want to...? I feel like this is a really interesting place where you could.

Justin Beaconsfield:

I hadn't considered that. Weave it in? Yeah, the idea of, if you could hypothetically see on a graph where those algorithms were pulling ideas together. Because all these social media networks run their algorithms in a graph network form. They store all the information graphically, nodes connected to other nodes, seeing relationships; it's all about relational data. And it'd be really interesting if there was transparency.

Samuel Wines:

I understand.

Justin Beaconsfield:

The thing is transparency.

Samuel Wines:

It's all open source, but it's like... show me some visualization. I mean, the Twitter algorithm is open source.

Justin Beaconsfield:

It's open source, but we can't visualize it.

Samuel Wines:

It's like, it's all just representing numbers. Show me how clicking on this video ends up with neo-Nazis in Slovenia, or something like that. And just playing and riffing on this, I'd really love to double-click on what we were chatting about before. How do you even frame that project?

Justin Beaconsfield:

You can probably speak to it well, Ben.

Ben Field:

So are we talking... we're still in the context of media bias, though? Maybe just highlight the graph...

Samuel Wines:

The graph network ideas, and then we can circle back. Remember how we were talking about an ecosystem tech stack of different AIs? So the LLM, and then that's integrating with something else, and then, at different scales, you've got different relationalities.

Ben Field:

So something that Justin and I have thought about a lot is the fact that there is an enormous amount of value, of novel information, that can be found just through synthesizing and connecting existing ideas that are potentially siloed away. Something I'm a big believer in is the value of being interdisciplinary. And why is that? I know that you're very big on that as well, Sam.

Samuel Wines:

Transdisciplinary from our point of view, but no biggie.

Ben Field:

Transdisciplinary. But why is being transdisciplinary valuable? It's because you find novel connections and new ideas at the margins, not at the center.

Samuel Wines:

Yeah, it's the overlapping of two ecosystems, which is called an ecotone, is where you get the most biodiversity, and that's kind of where the best ideas also come from.

Ben Field:

And so we see these polymathic people who are incredible at coming up with extremely creative insight, because they are able to think like a designer, but they're also able to think like an engineer, and like an artist, and they can take ideas from all of these different spots and put them together. And it's like, well, how can we use the new tools that have become available to help expedite that? Something that's really interesting is we've got all this scientific knowledge, but a lot of it is siloed away in different disciplines, just by the nature of publications and different disciplines at university. People specialize, you have to specialize, but specialists rarely communicate with each other. In the scientific literature in particular, there is this idea of disjoint sets of literature that very rarely interact. But if you were able to link insight between two disjoint sets, then you could potentially come up with something very valuable.

Ben Field:

And the idea here, it's called Swanson linking, is there was this guy who came up with a novel way of treating headaches, because he was reading in one place about fish oil and, somewhere else, about how magnesium levels relate to headaches. He knew that A connects to B and that B connects to C, so he could connect A to C. But that was only because he was able to look at these disjoint sets of literature and synthesize.
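That A-to-B-to-C step is mechanical enough to sketch. In this minimal Python illustration the two edge sets stand in for disjoint literatures, and every term in them is invented for the example rather than taken from Swanson's actual papers:

```python
# Two "disjoint literatures" as edge sets: one records A -> B claims, the
# other records B -> C claims. All terms here are invented for illustration.
lit_ab = {("fish oil", "blood viscosity"), ("fish oil", "platelet function")}
lit_bc = {("blood viscosity", "migraine"), ("vascular tone", "migraine")}

def swanson_links(ab_edges, bc_edges):
    """Return (a, b, c) triples where a shared intermediate b joins the two sets."""
    c_by_b = {}
    for b, c in bc_edges:
        c_by_b.setdefault(b, set()).add(c)
    # Any A -> B edge whose B also appears in a B -> C edge yields a hypothesis.
    return {(a, b, c) for a, b in ab_edges for c in c_by_b.get(b, ())}

hypotheses = swanson_links(lit_ab, lit_bc)
print(hypotheses)  # {('fish oil', 'blood viscosity', 'migraine')}
```

The point of the sketch is that no single edge list contains the A-to-C link; it only appears once the two sets are joined on the shared intermediate.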

Samuel Wines:

Proper systems thinking.

Ben Field:

And so we were having a conversation today about how LLMs aren't good at creative insight. Vertically, they are not more intelligent than a human being; they're less creative, less good at complex reasoning than even the average human being. But you can horizontally scale them so that they're able to consume and parse huge amounts of information. So it's like, what if you set LLMs to extract insight from the entire body of the scientific literature, to extract entities as nodes of these graphs and connections between the nodes, and then create a different representation of that scientific literature than just a 500-gigabyte repository of text? Maybe we could have it in a database form, and you can query that database, or maybe you have it in some new way that allows expert humans to see those connections without having to read the different articles.

Samuel Wines:

Or you could then use prompts to an LLM like GPT off the back of that, and go: can you find any novel ways in which... For example, you could ask a question like, there's got to be something in complexity science that weaves in with biology and physics, so that we can try and find a way to tap into, maybe, quantum biology, to then figure out how to harness photosynthesis.

Samuel Wines:

And then suddenly what you're saying is that you can weave in insights from, I guess, the biological, cognitive, social or ecological lenses of looking at life, and you can amalgamate them to come up with something that's qualitatively different than if it was stuck within any of the individual disciplines.

Ben Field:

And the thing is, it'll still be expert humans who are making that final leap, I think. But having a new way of visualizing that information or a new way of traversing that informational tree is going to hopefully be really valuable.

Justin Beaconsfield:

Yeah, and so that's the main thing here: can you use the power of large language models to go through all this information? The original one we really spoke about was medicine. Could you go through the body of all medical papers? Could you extract relevant information? We're still kind of deciding what we want to do.

Ben Field:

I mean, it's a very larval idea right now.

Justin Beaconsfield:

It's like maybe it's just keywords, or not even keywords, but key ideas that are discussed in each paper.

Justin Beaconsfield:

And then you make each of those ideas a node, all the papers are nodes, and then you just start linking everything up. And then someone wants to explore some idea related to, I don't know, some disease, disease X, and they realise that disease X has this feature. Now, because it's represented graphically, they can see that disease Y also has that feature, and it's like: oh, but we actually solved disease Y, we did this. And then we realise: oh, we hadn't considered that for disease X, but could we apply what we know here?

Justin Beaconsfield:

And I guess we've seen that there seems to be something that works really well with this graphical representation, like on social media and things like that. And the cool thing that large language models allow us to do really well is that previously, the task of taking all these papers and just extracting key ideas was a really difficult task. And it's not a difficult task because we as humans aren't smart enough to do it; it's quite a low-level reasoning task. It's simple reasoning.

Samuel Wines:

Perfect for an AI.

Justin Beaconsfield:

Exactly, but it's at scale, and that's the thing we can't do. I can't sit there and read a million papers in a day.

Ben Field:

Well, that's the thing. This Swanson guy, he was a doctor in the 80s. Actually, I don't even know if he was a doctor, maybe just an avid medical-literature kind of nerd.

Samuel Wines:

Nerd in a complimentary way, though. We love nerds here, we do.

Ben Field:

But he just took in a large body of information that it was uncommon to take in, in the combinatorial sense, and because of that he was able to connect these nodes.

Samuel Wines:

Slip on my microphone.

Ben Field:

And as we were researching this, it turns out there's a whole field called literature-based discovery.

Samuel Wines:

But I think you're taking it to a whole other level, though. What you're doing is a meta literature review. This is a literature review that creates literature reviews of all of the work, and then finds the threads that connect them. It's like creating a mycelial network that can go out and weave together this rich tapestry of human knowledge. We've gone out into all these different disciplines and we've discovered so, so much, but the issue is we haven't found a way for them to communicate effectively together.

Ben Field:

And the internet has been this incredible accelerator of access to information. Now we need better ways to traverse that information.

Samuel Wines:

Like.

Ben Field:

Google has opened up this whole world of the ability for a polymath or an autodidact to go off and read Wikipedia pages and find courses. But is that the optimal way to find information? Is that the optimal way to connect information?

Samuel Wines:

Can you reduce the foraging from having to actually go out and do that, to just something that has all of the food sources there for you?

Justin Beaconsfield:

And I think what will also be interesting is this: when you create a graph network like this of information, it starts becoming easier for some human to read in relevant information and come up with novel ideas, because they're looking at the network. If you've got the example of disease X and disease Y, maybe this particular person is trying to work on disease X, and so now they can look at the graph network.

Justin Beaconsfield:

But the next extension to that is: we also have a lot of algorithms and AI that have existed now for over a decade, for social media, that are really good at drawing inferences from these graph networks, and there are graph neural nets. So it's like, okay, what if, first of all, we use large language models to create the graph network really efficiently, because they just do all this tedious work for us, but then we use clever algorithms and clever AI of a different sort to start reading in that information and drawing inferences from it? Can we have something emerge, or at the very least, suggestions? And yes, humans will probably need to be in the loop. Not even probably: we certainly need to be in the loop, taking those suggestions and running with them and researching them. But can we use these things to just point us in the right direction?
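The "draw inferences from the graph" step Justin mentions can be illustrated without a graph neural net: a classic common-neighbours link-prediction heuristic already surfaces the disease-X/disease-Y kind of suggestion. The graph below is invented for the example:

```python
# Toy link prediction over a hypothetical disease/feature graph: rank
# unlinked nodes by how many neighbours they share with a query node.
# (Real systems might use graph neural nets; this just shows the idea.)

graph = {
    "disease X":   {"feature f", "feature g"},
    "disease Y":   {"feature f", "feature g", "treatment T"},
    "feature f":   {"disease X", "disease Y"},
    "feature g":   {"disease X", "disease Y"},
    "treatment T": {"disease Y"},
}

def predicted_links(graph, node):
    """Rank nodes not yet linked to `node` by shared-neighbour count."""
    neigh = graph[node]
    scores = {}
    for other, other_neigh in graph.items():
        if other != node and other not in neigh:
            overlap = len(neigh & other_neigh)
            if overlap:
                scores[other] = overlap
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(predicted_links(graph, "disease X"))  # [('disease Y', 2)]
```

The output suggests disease X is structurally closest to disease Y, which is exactly the cue a researcher would follow to ask whether treatment T transfers.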

Ben Field:

Just shine the light, just accelerate discovery, just accelerate the ability of any one individual to increase their output, increase their effectiveness. And then, if you do that across the scale of society, you increase society's output, society's ability to innovate and to experiment and to find connections. We increase the value of our informational assets.

Justin Beaconsfield:

It's scary. I mean, it's not scary, it's mostly good. But I guess the thing that this is doing is shining light in the dark places we hadn't considered, like "consider this, consider that". But then, evidently, when you... I'll be back.

Samuel Wines:

You guys keep going. Yeah, I'll see if I can drag Andrew in.

Justin Beaconsfield:

Yeah, yes, yeah. So actually, when you shine light into those dark places, you will also wind up illuminating things that it would probably be better for people not to discover. How can you use these things to be exploitative? You know, some pharmaceutical company could come along with some new drug that, I don't know what it does, but it's detrimental to people, gets people super hooked, and is super easy to sell. But I mean, that's the thing: any new bit of technology has pros and cons. I think overall it's super positive. It's just always something you've got to consider. All right, we're creating new power; just make sure you do as much as you can to funnel it towards good, not bad.

Ben Field:

I do think if you're going to work in technology, you kind of have to be an optimist about the value of technology as well, right? I mean, we've spoken about Zuckerberg's appearance on Lex, and that shifted my opinion of Zuckerberg a little bit, in the sense that I actually think he truly believes it. Whether what he's building is for the good of humanity or not is a separate question. But I think if you're going to build things, you have to believe that you're building stuff for good. And that doesn't mean being Pollyannaish, being blindly optimistic, but I think you have to believe that if you're going to create technology, its net impact is good. You have to believe the progress is good.

Justin Beaconsfield:

I mean, yes, to an extent, Ben, but the way I would like to think about it is not to reduce it to something so simple as to say progress is good or progress is bad. It's to say that some progress is good, some progress is bad, and that we should stay aware and not just blanket-enjoy all progress. Take the case of Zuckerberg. I personally don't think that social media, at least in the form it's in, is fantastic for the world. I think we've got a lot of mental health issues and whatnot that stem from social media. I don't think it's fantastic. But if you ask Zuckerberg about it, I truly believe that he wholeheartedly thinks he's doing a good thing for the world. He's connecting people.

Justin Beaconsfield:

Yeah, and I guess what I'm saying is that maybe you have to be blindly optimistic to be able to be the one to create it, but it doesn't necessarily mean that you're going to do good for the world. And I think it is always worth constantly questioning whether the work you're doing is good for the world.

Justin Beaconsfield:

No, I know, but I guess that's what I'm saying: you can't just blindly believe that things are going to be good, because that can lead you down the wrong path. It can lead you to blinding yourself. So I think it's always worth staying vigilant and staying aware, just trying to make sure that whatever you're building is going to have a positive impact.

Ben Field:

I think vigilance, but optimistic vigilance. I do think you have to believe that technology, when done correctly, creates good.

Justin Beaconsfield:

Yeah.

Ben Field:

What's the point in making it?

Justin Beaconsfield:

I feel like I've heard Lex Fridman talking about this a bit. He talks about how you have to be an idealist, to the extent that you have to have a vision for a better world that you can pursue. If you sit there as a skeptic and you're like, the world's messed up and everything's terrible, well then you're almost just manifesting that in the world itself. Whereas the idea is to have this idealistic view of an ideal future and then work towards it.

Justin Beaconsfield:

You have to believe in a better future if you want to make the world better; you absolutely have to be optimistic and believe in a better future. But I think still treading with caution, and not just blindly thinking that everything you do that changes the world will change it for the better, is also really important.

Samuel Wines:

I think what you're describing there is essentially speaking the language of complex... Do you want some more water? Yeah.

Andrew Gray:

I'll get.

Samuel Wines:

I'll get back to that sentiment in a second.

Samuel Wines:

What you're essentially saying is, you're kind of referring to a complex adaptive system, like a socio-ecological system, where you have the propensity to learn and adapt and evolve in an ongoing manner. And what we need to be doing when we're trying to create this technology is to have these feedback loops of checking and ensuring. It's like a design-led approach to tech and innovation, where we're asking: is this supporting a resilient, regenerative future? Is this actively helping bring humanity within planetary boundaries, or raising social foundations for everyone?

Samuel Wines:

And being real with that, looking at that, and having strong views loosely held, so that if something comes back, you're willing and open and accepting of dissonance, and can go: okay, cool, this is not what I thought. And then you sit with that, and you're using collective intelligence, not just one person sitting there. It's constant questions; there's no answer that will be right 100 per cent of the time. The tech and the things that we do now will be the problems of tomorrow. But how can we do our best to ensure that, for the most part, these technologies are going to be net positive? It takes more time to do things ethically and sustainably than it does not to. But what's fascinating about tools like this is that they can actually open up and allow you to have more effective deliberation, and still be somewhat rapid in the development process.

Ben Field:

I guess. But yeah, we've still got a big engineering challenge ahead of us to build it, but I think the concepts are really exciting.

Samuel Wines:

Man, I'm 100 per cent, yeah. I'm so down to try and find some capital to support something like that. I think it's such a fascinating concept, to be able to effectively and systematically address all of, let's say, the challenges that we might face in our current bioregion, or any bioregion. And it could be a really useful tool and platform for, like Andrew was saying before, just making sense of scientific data. Take that paper, that novel information: if we've run something that can map it, it might flag that it's not a hundred per cent factual, or has been sensationalised, or the opposite. It might find these weak links and inferences between things, or say: you should double-click on that, because there could be something really valuable there.

Ben Field:

I'd be really interested in digging into that more with Andrew, how he envisions that looking, because I don't really have a good mental model right now of how you could identify what the signals for poor research versus good research would be. I mean, obviously there are p-values and all that sort of stuff, but I don't know how good an LLM would be at that.

Justin Beaconsfield:

But it might just be the next phase, the actual assessment of the graph. Yeah, originally all you're getting is just the mapping of all these things, and for sure you're not necessarily getting an inference from that. But once you've mapped them, do you start finding key relationships between good papers and bad papers? It's like, how...

Samuel Wines:

it sits in the graph network.

Justin Beaconsfield:

I guess that paper just has certain relationships to other things, and it's like: oh well, that's probably a bad paper. It can be hard if the data is not open.

Samuel Wines:

Which is why, I don't know if we've told you about STARDIT, Standardised Data on Initiatives. It's a paper we co-authored with Jack Nunn and about a bajillion other people, on how you can standardise, literally, data

Samuel Wines:

and reporting, and how you can have it in an immutable ledger, so you can have provenance of data, see all of the connections, and link everything together in an open-innovation framework. To me, that feels like a real way of the future, if we can find a way to ensure that it can't be gamed for the negative. That radical transparency, open innovation, collaborative innovation, to me feels like a really strong potential attractor, if we can lay the right architectural foundations from an infrastructure perspective, a social-structure perspective, and a cultural or superstructure perspective. Do you think part of that is also lowering the barrier of skills and resources needed to reproduce a paper?
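The "immutable ledger for provenance" idea Sam raises has a minimal core that fits in a few lines: each record's hash covers the previous record, so tampering anywhere breaks every later link. This sketch (with invented payloads) leaves out signatures, distribution and everything else a real system would need:

```python
# Minimal hash-chained provenance ledger sketch.
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash commits to the previous record."""
    prev = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to history makes this fail."""
    prev = "genesis"
    for rec in chain:
        body = json.dumps({"prev": rec["prev"], "payload": rec["payload"]},
                          sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"dataset": "assay-v1", "author": "lab A"})
add_record(chain, {"dataset": "assay-v1", "reused_by": "lab B"})
print(verify(chain))                      # True
chain[0]["payload"]["author"] = "lab C"   # tamper with history
print(verify(chain))                      # False
```

The design choice worth noting is that verification needs no trusted party: anyone holding the chain can recompute the hashes and detect rewritten provenance.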

Ben Field:

Like, for instance, the reproducibility crisis is very real, and the easiest way to test if a paper is legit or not is if I can go and replicate its results. But to replicate its results, I need a lab and I need a certain set of skills. And can you do that in silico?

Samuel Wines:

Is that what you're sort of saying?

Ben Field:

In silico, or do you have cloud laboratories where you can automate the reproduction of experiments? Obviously this is speculative; we're a long way off. There are so many advances we need in automation and robotics and all this sort of stuff before we can get there. But imagine if you could find a way to systematise the reproduction of papers, whilst lowering the strain on the humans who are able to go and do that.

Justin Beaconsfield:

And it would be interesting if there were labs dedicated entirely to reproducibility.

Ben Field:

Yeah, you'd also need people to staff the labs, right? But what if you could have robots staff the labs, and you had a way of parsing papers that extracts the method and puts it into pseudocode that the robots can understand? Are there open-science groups actually, actively doing that?

Justin Beaconsfield:

Yeah, it wouldn't surprise me if there were. I can imagine plenty of people.

Ben Field:

Yeah, it's gonna be awesome. I mean, Lee Cronin just came out with the Chemputer, which isn't quite this, but it was automated chemical synthesis. And then you told me about someone else who was doing something very similar. Yeah, we just had someone start in the lab.

Samuel Wines:

Oh, really? Yeah, today, and that's why I went out to try and find Andrew. He's actively helping Bersin from Caramex sort of set up his stuff. Look at us, on our phones on a podcast.

Justin Beaconsfield:

What are we, screenagers? I know.

Samuel Wines:

It's horrible. My excuse is I was trying to figure out what that group was that's actually doing exactly what we're speaking to. Oh yeah, open science. It's a guy from the University of Virginia, the Center for Open Science.

Justin Beaconsfield:

I think so. They're literally scrolling TikTok.

Andrew Gray:

That's all the alt-right content that's coming through.

Samuel Wines:

Anyway, what is, um, what's Raava?

Justin Beaconsfield:

Good question. We're still figuring that out. But what...

Ben Field:

like what is the name, or like what is no?

Samuel Wines:

no, I mean, you can go there if you want.

Ben Field:

No, I mean, the name is completely uninteresting. We were originally Base 2, which has a bit more grounding, kind of... I mean, obviously the base-2 numbering system. But you weren't the first nerds to think of that.

Justin Beaconsfield:

Not even close.

Samuel Wines:

Hey, hang on. Yeah, let's give him some intro music.

Ben Field:

Oh, I feel like a punk.

Samuel Wines:

Andrew Gray is in the house.

Andrew Gray:

Oh nice.

Samuel Wines:

Yeah, we're here.

Andrew Gray:

What have you guys been talking about?

Ben Field:

Oh, all over the place. But we were talking about what we were talking about before, the idea of: can we transmute our scientific body of knowledge into a different medium?

Samuel Wines:

So I think, when you were saying how that would be useful in science, Ben and myself couldn't quite articulate it, because you were kind of like: oh, that'd be really interesting if you could do X.

Ben Field:

Just in regards to identifying bad science, or identifying poorly reproducible work. Yeah, I'd be really interested in what signals you'd look for.

Andrew Gray:

PubPeer is a really good example of that, with plenty of people in the community that, you know, go there for a good laugh about bad science being reported. It's not funny, but it can be funny, what people actually think they can get away with when it goes to peer review. Images are a big one; a lot of people will doctor their images.

Samuel Wines:

Oh man, right. Gel images and stuff like that, yeah, copy-paste.

Andrew Gray:

That's pretty bad. Even colonies: you know, bacteria plates, petri dishes, trying to say "here, we were successful in transforming this bacteria". But I guess what I was thinking about was, if you have this map...

Andrew Gray:

...I guess, of all these different correlations of papers, and the linkages between the papers and the data, not just the images but the actual conclusions, which could be translated by, you know, an AI. We were talking initially before about how correlations that might not be as strong could be an indicator of something novel and interesting, and worth, you know, checking out. But it could also highlight: well, maybe this wasn't done correctly, or something else. So it could be one or the other, right? It could be something novel, something worth pursuing, or it could be something that needs more scrutiny.

Samuel Wines:

Well, there's a nice overlap there, where the things that are potentially wrong are also probably the things worth checking out the most, because they could also be a discovery. Yeah, exactly. That's not a new thing for graph networks, that's just... No, no, just that the novelty being where something intriguing and interesting sits, I think, yeah, you hit the nail on the head there.

Justin Beaconsfield:

No, but it's a cool thing also just to highlight novel things in general in a graph network. If you have an existing graph network and you add something to it, and it goes: oh, this looks particularly novel for a paper linking this bunch of ideas, that just helps alert us, on some now quite objective or quantifiable measure: oh, this has a high novelty score, let's look over here.
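One way to make Justin's "novelty score" concrete, purely as a toy sketch with an invented graph: a paper that combines concepts sitting far apart in the existing graph scores higher than one combining already-adjacent concepts, using plain BFS distance:

```python
# Toy novelty score: mean pairwise graph distance between the concepts
# a new paper combines. Far-apart concepts => higher novelty.
from collections import deque

def distance(graph, a, b):
    """Shortest path length from a to b (None if disconnected)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def novelty(graph, concepts):
    """Mean pairwise distance over the concepts a new paper combines."""
    pairs = [(a, b) for i, a in enumerate(concepts) for b in concepts[i+1:]]
    dists = [d for d in (distance(graph, a, b) for a, b in pairs)
             if d is not None]
    return sum(dists) / len(dists) if dists else 0.0

# A small chain of existing concepts: A - B - C - D
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(novelty(graph, ["A", "B"]))  # 1.0: adjacent ideas, low novelty
print(novelty(graph, ["A", "D"]))  # 3.0: distant ideas, high novelty
```

As the transcript notes, the same high score can mean either a genuine discovery or a paper that needs scrutiny; the metric only flags where to look.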

Ben Field:

Then there's baking in incentives for research into that, being like, you know, here is a bounty or something for exploring this area. Also, on the note that Andrew brought up before, there's an incentive problem, clearly: you're only incentivised to produce positive results.

Andrew Gray:

It's like, are we really sure we want to incentivise failures? But they should be, right?

Ben Field:

Like, a failure, if it is not just "we did this and it didn't work", but an educational failure, in that it points you towards a strategy or a result that is informative, informative in that this is not the route to go down. A failure can be as informative as a positive result, but we don't incentivise failures. And so it has the downstream effect of incentivising faked research, but also we're maybe losing a lot of value.

Andrew Gray:

Yeah, it's a tough one, right, because nobody publishes... I mean, you might publish failures as part of a larger data set, but ultimately, what's going to drive the publication of that paper is that you've discovered something new, advanced the field, you know, and there's good science to back it up. And the new thing is not that it didn't work.

Samuel Wines:

Unfortunately, because it costs money, journals favour positive results over negative ones. Although there is a Journal of Negative Results in Biomedicine, one for ecology and evolutionary biology, the Journal of Articles in Support of the Null Hypothesis, Pharmaceutical Negative Results, because I imagine there is some salience in those things.

Justin Beaconsfield:

In the context of being linked together.

Samuel Wines:

It makes way more sense than individually.

Justin Beaconsfield:

I can see it being not that useful on its own. But if we're talking about efficient use of resources: how many studies have probably been completed on something that someone else already figured out ten years ago, but it just wasn't that interesting, so they never did a report on it? And so someone's like, I'm going to explore this, and they just didn't have a reference saying "we tried it, it didn't work". Which, I mean, sometimes you do want to redo.

Ben Field:

The externality of not publishing is actually quite high.

Andrew Gray:

Yeah, sometimes you still want to try those things. But if the result isn't really strong in the first place, then maybe we could just better allocate our resources. For it to get published, it would need to be peer-reviewed; it would have to go up in front of a review board. So resources need to be allocated.

Samuel Wines:

But I wonder if, you know, maybe AI in this case could help: you just submit it, and see if it can actually peer-review it. Like an open-innovation or open-science, AI-mediated peer review, where anyone can publish for free, and you publish both positive and negative results, and then you have an LLM or something in the background that can weave it all together and make it contextually relevant for what you need or want.

Andrew Gray:

I'm sure there are lots of arguments for and against, but I mean, I don't know...

Ben Field:

A peer reviewer is quite a highly experienced person, right? I don't know if an LLM... it might be good at a first pass of a peer review, or maybe, if you could... I think it would supplement.

Andrew Gray:

It wouldn't be a thing that you just rely on. Yeah, just like you wouldn't rely on publishers and peer reviewers to be the be-all and end-all, because, like, how does this bad science get out there?

Justin Beaconsfield:

But that's the other thing: just expediting these processes. Yeah, exactly. I forget where I heard this, but someone was suggesting, not criticising, that research in academia was going to start being cannibalised, largely, by private research, because academia is just so slow; peer reviews take time, and these things are inefficient.

Andrew Gray:

Pre-print publications now, like bioRxiv.

Justin Beaconsfield:

Yeah, and so, just things that can expedite those processes, make it quicker. Because, I mean, there are so many benefits to academia, and that being the route by which we do research.

Samuel Wines:

Hmm, but also, like you just called out: for example, we're looking at setting up our own research arm because of exactly that. If you can remove, I guess, multiple layers of bureaucracy, and have quicker iterations and feedback loops, you can potentially come to something novel way quicker. But obviously the catch is you need to have the funding to back it, and you need to have the expertise, so that can be a very difficult kettle of fish, funding.

Andrew Gray:

Yeah, yeah. But I really like the idea of what you guys were talking about, this graph network of publications. Because for me, you know, being able to help, for example, Bersin with his machine-vision drug-discovery algorithm, even just setting up a basic experiment, we went through I don't know how many papers trying to back up every idea. It's such a tedious process.

Andrew Gray:

Um, so if you had something there that could at least point you in the right direction, or the multiple different directions you could go down...

Justin Beaconsfield:

Out of interest, when you're going through papers in this process, what are you looking for in each of these papers?

Andrew Gray:

Methodologies. So, looking at the methods, looking at the conclusions, looking at the data sets to see what they got. Is this for, like, deciding what to test on?

Justin Beaconsfield:

Is this to see where there might be things that you can apply yourselves, like trying to find links? What do I mean... what trends are you trying to see by looking at these methodologies and results?

Andrew Gray:

Right. So the outcome for us, when we're looking at the papers, is to be able to create, I guess, a unique experiment. In this case, we wanted a baseline: set up an experiment that could give us a baseline, so that we could test these compounds against neurons and get some measures, some sort of response. You know, how are the neurons behaving in response to the dosing of this particular compound? And, not being a neuroscientist myself, I'm relying heavily on the papers being published. It's also nice to have a building full of neuroscientists. Cortical Labs: hey, do you know how many millivolts a neuron produces?

Andrew Gray:

Let me go ask. That's pretty cool, yeah. But to set up that experiment, you need to know: how much does a neuron produce, as far as millivolts? So you're going through the literature, you're asking these questions, and then when you find, I suppose, the right... well, an answer, then you're looking: okay, how did they obtain these results?

Ben Field:

And so that's basically going to become the basis of that experiment. The other interesting thing is, you're looking through neuroscience papers, but then maybe there's some interesting insight in machine-learning papers, maybe some interesting insight in machine vision.

Samuel Wines:

That's exactly right.

Ben Field:

But yeah, I'm very excited by this idea of being able to identify connections of A to C by identifying A to B and then B to C.

Samuel Wines:

It's going to be the most valuable and important area for science to progress. If you look within the disciplinary areas, it's just so much more effort to find anything new and novel at the moment. It used to be that one person could write a paper and advance the field; now there are teams of 30 or 40 on a paper, and the advance is tiny. The overlapping intersectionality of how design, art, science, engineering and computer programming all overlap and interrelate with one another: that is the ripe area for these innovations and new ways of thinking, doing and feeling to come about.

Justin Beaconsfield:

I wonder how much, in terms of the biggest advancements, and exactly based off what you're saying, will come from humans discovering truly novel ideas, versus how much it will be a matter of us just linking up all the ideas we already have. We've discovered so many things in isolation.

Samuel Wines:

So much of innovation is linking two things together that weren't linked before. Yeah, yeah.

Ben Field:

There's a lot of talk about, oh, we'll have AGI when we have an AI that can discover novel science. And a lot of people take that to mean when AI can come up with the final leap, the final insight, to create some novel science. But there's probably a lot of novel science, there is a lot of novel science, sitting in just connecting the nodes of the science we've already discovered.

Samuel Wines:

Well, look at all of the leading fields at the moment: biochemistry, that's an interdisciplinary thing; bioinformatics; AI in biology; fintech. It's all of these overlapping areas. And what we're trying to say is: take a transdisciplinary approach, throw all of that together into a particle accelerator and see what pops out the other side.

Justin Beaconsfield:

It's crazy. Yeah, it is crazy. It's almost like we've exhausted so many fields now; we've discovered almost all the low-hanging fruit in so many fields, to the point that anything else we find from here is going to be pretty tough. So the next thing is the second degree: all right, now...

Samuel Wines:

Let's try all the different permutations of combining two of them and see what we get. What you've just perfectly articulated is that we have come to the end of the functional use of breaking things apart and looking at the smaller components. We have mastered the art of reductionism to the point at which we get so small that we actually can't even use it anymore, because, like the Heisenberg uncertainty principle, it's all quantum down there. It's a field, it's not even a fixed thing; linear Newtonian mechanics doesn't work. So what you've just done is essentially call out exactly what's happening.

Ben Field:

It's like we're coming to a paradigm shift where we have to acknowledge that all of these disparate disciplines are interrelated and interconnected. Also, categories are arbitrary human inventions. Categorisation is inherently fairly arbitrary: the boundary between chemistry and biology, or between chemistry and physics, is a line drawn in the sand. But they're convenient categories because human beings are limited like that. If we can enhance the ability of a human being to... language doesn't allow for, you know, non-boundary. Yeah, exactly, everything has to be bounded.

Justin Beaconsfield:

Language almost just reflects the inner workings of our minds. We've categorised things inherently; that's just what we do. We really struggle to see things as fluid, we like them fixed.

Samuel Wines:

So language kind of reflects that. Especially in how we like to look at things: we're mostly adjectives, mostly describing words, mostly nouns. There are languages where it's all verbs, and that's way more in alignment with how the world works, because the world is a process, it's not a fixed state.

Andrew Gray:

So on that, how does something like AI, like ChatGPT for example, which is literally a word predictor, how does that...?

Ben Field:

I like to call it a word synthesiser. Excuse me.

Andrew Gray:

...deal with, you know? Do you reckon, in your experience, that it doesn't respond to these borders?

Ben Field:

Well, it's not that it doesn't respond to the borders, but from a breadth perspective it is far more well educated than any human being on earth. It's not, from a depth perspective, more well educated than an expert in any given field. But it allows me, if I know how to program from a higher level, if I understand roughly what's going on and what a programming language is, to obfuscate the need to have memorised the syntax of JavaScript if I only know Python. It means I can write a web app without having to spend as much time figuring out where to put my semicolons.

Ben Field:

Yeah, and I think that extends to a lot of fields. Now it means that if your expertise is in field X, you can borrow some tools from field Y without having to take a real detour to memorise how to use those tools. You can operate at one higher level of abstraction, which means you can jump boundaries much more easily than you could before, because you don't need to learn the techniques at the syntactic level or the specifics level. You can communicate at the level of ideas more easily now.

Justin Beaconsfield:

On your question of whether these large language models tend to categorise things: it's hard to give you a clean answer. My sense is that they do a little bit, because they're trained on human data, so inherently there's a lot of mimicking of what we do and the way we see problems. That said, they also do a really fantastic job of generalising concepts to some extent. And there's also this idea of what's contained in the large language model.

Justin Beaconsfield:

There's all this information, and all we get to see is input and output; we don't necessarily get to see the inner workings of the language model. There's a lot of work going into that, but in terms of how far along it is, it's essentially nowhere. So it's a hard thing to answer, because we don't know how these things see the world. We don't know if they're categorising like we are, because we're still very much in the dark about what they're doing. All we're really seeing is responses to the questions we ask. But part of that can also be: well, maybe we're just not asking the right questions.

Ben Field:

I mean, they don't have new knowledge, they just have the knowledge that is encoded in human text. But what they are is a new way of communing with the hive mind. Up until this point, the internet was an incredible tool because it allowed the distribution speed of information to increase by several orders of magnitude: you don't need to go to the library and leaf through textbooks, you can access that information at a much, much higher rate. But now, large language models allow you to query the body of human knowledge in a different way, and in a quickly improving way.

Andrew Gray:

So there's this format we use in biology, called FASTA. I don't know what it stands for.

Ben Field:

That's in bioinformatics, yeah. So you can basically get your...

Andrew Gray:

DNA sequence, yeah. So you download a FASTA file. I wonder why it's in a FASTA file, like, it's just ATCG.

Ben Field:

Like, yeah, it's fast.

Samuel Wines:

FASTA is software, one of the first tools developed to search for similarities in protein and nucleotide sequences.

Ben Field:

Oh, so it was just a good way of giving the sequence to the software. Yeah, yeah. So is there a plugin for that on ChatGPT?

Samuel Wines:

I don't know, I'll just ask. Definitely would be keen. "Do you have a plugin for FASTA?" Because I feel like, once you start, especially when you're talking about fields like synthetic biology...

Andrew Gray:

"I don't have a specific plugin..." I don't know why I've got such a jovial voice for it. "I don't have a specific plugin exclusively for FASTA format data, but I can certainly assist with tasks related to FASTA files using my existing capabilities."

Ben Field:

I mean, I think a large language model would struggle with DNA sequences, because they're not represented very well, and they're just enormous files that require high levels of precision. A large language model can create a facsimile of a DNA sequence, but it's definitely not going to be good at producing an exact DNA sequence, because it's essentially a next-word predictor. I don't know, I actually don't know enough about bioinformatics to know where it would be useful or not.
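For readers curious about the format being discussed: FASTA is just plain text, where each record starts with a ">" header line followed by lines of sequence. A minimal parser sketch (the example records below are invented for illustration) might look like:

```python
# Minimal FASTA parser: a record is a ">" header line followed by
# one or more sequence lines (e.g. A/T/C/G for DNA).
def parse_fasta(text):
    records = {}
    header = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            header = line[1:]
            records[header] = []
        elif header is not None:
            records[header].append(line)
    # Join the wrapped sequence lines into one string per record.
    return {h: "".join(seq) for h, seq in records.items()}

example = """>seq1 demo sequence
ATCGATCG
GGCC
>seq2
TTAA"""
print(parse_fasta(example))
```

The simplicity of the format is the point made in the conversation: it was a convenient way to feed sequences to alignment software, and it stuck.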

Justin Beaconsfield:

I mean, the thing I think with DNA sequencing, in terms of AI being useful...

Samuel Wines:

I would imagine it's not large language models.

Justin Beaconsfield:

But AI will be fantastic. If you ask what large language models are doing: they're taking language and finding patterns in it. Their task is to optimise for predicting the next word in a sequence. So they have to do, I would say, three main things fantastically. One, understand language and the structure of language impeccably. Two, pretty much know everything about the world, because to predict the next word that's going to come in a sentence, you have to know, say, that a lamb is a baby sheep. You can't predict the next word if you don't have that.

Justin Beaconsfield:

And the third thing it needs to be able to do is decent amounts of reasoning. So what it winds up doing is baking all those things into the model, because those are the patterns that underlie human language: language, world knowledge, reasoning. So we've got a large language model that's fantastic at finding patterns for those things. Now take similar architectures, but optimise for something completely different. Instead of trying to hijack a next-word predictor, why don't we create AI systems that really, really intelligently start finding patterns in these really complex pieces of data? The patterns they're looking for are going to be entirely different to large language models.
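Justin's "predict the next word" framing can be illustrated with a toy, purely statistical sketch, a bigram counter over an invented corpus. This is light-years from a real LLM, but the training objective is the same shape: given the previous word, guess what comes next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent successor.
corpus = "a lamb is a baby sheep and a calf is a baby cow".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word`, or None if unseen.
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("a"))  # "baby" follows "a" most often in this corpus
```

A real language model replaces the count table with a neural network conditioned on the whole preceding context, which is what forces it to absorb grammar, world knowledge and reasoning, exactly the three things described above.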

Justin Beaconsfield:

But I would be shocked if we don't get some incredible, incredible advancements in the not-too-distant future.

Ben Field:

On neural networks: how much labelled biological data, like DNA sequences, do we have? Is there a lot of labelled DNA, where it's like, this codon sequence codes for this particular...?

Andrew Gray:

Yeah, there's a typical outcome, like, yeah. So there are a lot of databases; a prime example is GenBank. I don't know how many submissions are on GenBank. Generally, any announcement or any discovery involving DNA, coding or sequences, and it doesn't have to be huge...

Andrew Gray:

It could be a small little oligonucleotide, which is just a fancy word for a small piece of DNA. Those would generally get labelled and submitted on there. So if you could upload that, and maybe it's not ChatGPT itself, maybe it's just calling on a plugin to translate what the prompts are, to then be able to search that and work with those sequences. It'd be pretty comprehensive, I imagine.

Ben Field:

I'm sure, I mean, I know a lot of people are looking at AI and biology. It's a super interesting space because we do have a shitload of data. We have so much data. But the problem is how we represent it: in the case of DNA, we present it linearly, but you could probably just as easily represent DNA as a complex multi-dimensional structure, where this has a feedback effect on that.

Andrew Gray:

100%. I mean, the other thing there, too, is that, you know, the Human Genome Project, I'm pretty sure that's just Craig Venter's genome, but so much research and so much medicine has been based off that one genome. And I know there are projects now to make genomics more equitable as far as what datasets we're using, not just from one white dude but from all sorts of different backgrounds. So that's also a caveat there.

Andrew Gray:

But yeah, man, just trying to think of what I would ask it. Got nothing. That's the thing. I remember when we built the lab initially for BioQuisitive, it was like: oh, we built the thing. What do we do with it? I don't know.

Samuel Wines:

That's always the first thing: I didn't think that far. Exactly. Maybe we make hangover-free beer.

Andrew Gray:

Yeah, that didn't work out. Did you try to do that?

Ben Field:

Yeah, we did, but it didn't.

Justin Beaconsfield:

Yeah, how did you try?

Ben Field:

to make hangover-free beer?

Andrew Gray:

Oh, it was always "hangover", in quotation marks, free beer.

Ben Field:

How would you make hangover-free beer? Like, do we know what causes... Isn't it just dehydration? What's a hangover, actually? I have no idea what a hangover is. No, it's acetaldehyde.

Andrew Gray:

There's a thing called acetaldehyde. Your body breaks down ethanol into acetaldehyde, and that forms ketone bodies and all these things which flag your immune system to go and attack it. So the issue is that your body produces acetaldehyde from the ethanol faster than it can clear it out. It builds up over time, and that causes your hangover.

Ben Field:

I'm sure there's something there. If you could make truly hangover-free beer...

Justin Beaconsfield:

I think somebody's worked on that. I was going to say, you would pay off the lab in like 12 seconds.

Ben Field:

I think the problem is people would just drink more of it and then get alcohol poisoning.

Justin Beaconsfield:

Yeah, it's a little lousy yeah.

Samuel Wines:

And then, straight up, what's that, the maximum power principle or something like that? Whenever you have energy efficiency gains, people just ramp consumption back up and use more of it. I feel like it'd be the maximum alcohol principle.

Ben Field:

Exactly. I'm sure there'd also be some crazy negative externalities where, oh, it doesn't make acetaldehyde, but it, like...

Samuel Wines:

Does make you go blind.

Justin Beaconsfield:

And that was how we sterilized half the country accidentally. Yeah, yeah, yeah.

Ben Field:

It's like the start of I Am Legend.

Justin Beaconsfield:

It's not a cure for cancer, it's just everyone getting drunk on hangover-free beer and we all turn into zombies.

Samuel Wines:

I think we've just got ourselves the next Netflix exclusive. We never actually explained what Rava is.

Justin Beaconsfield:

Oh yeah, okay. Well, I mean, it's fairly ill-defined at the moment. I think we can do better than we did.

Samuel Wines:

Yeah, the best things are all ill-defined.

Justin Beaconsfield:

So the thinking was: Ben and I got really into exploring the application of artificial intelligence, and what we noticed is there's a massive translational gap between most organisations and the capabilities of these new tools. Most organisations could be using these tools to make themselves way more efficient, but people just don't really know how, and they don't know where to look. As the quote-unquote experts in the field, which didn't even necessarily require knowing that much about the tools, we thought: all right, we've got the capacity to really help with this translational gap, with organisations being able to apply these tools. We can help a bunch of organisations out along the way, and in doing so, start to find some products that we could commercialise, or products that could just be really good for the world, all sorts of things like that.

Justin Beaconsfield:

We're still figuring all of that out, but that's the main idea: bridge the gap between what these tools can do and where they can be used. We'll go into organisations, find the problem spots, and then eventually productise those things.

Samuel Wines:

And you're already working with a few people, Ben?

Ben Field:

Yeah, we're working with a fairly broad range of companies at the moment, but we're thinking we'll niche down.

Ben Field:

I think if you can speak the jargon of a particular industry it really helps, and it makes your services more repeatable. It puts less overhead on us to come up with novel ideas and learn the lay of the land every time. If we work with a lot of law firms, say, then we know the jargon of the law industry and we know how to deliver things that actually solve problems, rather than rinsing and repeating the same basic AI idea across lots of different industries. But the reason we're doing all this consulting stuff is because we want to get good at building and good at executing. I think we both have a mind to build a product eventually. The thing that gets us excited is stuff like: can we make a knowledge graph out of all of the world's scientific information? Stuff like that is really exciting.
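As a toy illustration of the knowledge-graph idea Ben mentions (the papers and concepts below are invented for the sketch): link concepts that co-occur in the same paper, then look for concepts that are only indirectly connected, which is the "connecting the nodes of science we've already discovered" opportunity raised earlier in the conversation.

```python
from collections import defaultdict
from itertools import combinations

# Toy knowledge graph: connect concepts that co-occur in the same
# (hypothetical) paper, stored as an adjacency map of sets.
papers = [
    {"CRISPR", "gene editing", "Cas9"},
    {"gene editing", "machine learning"},
    {"machine learning", "protein folding"},
]

graph = defaultdict(set)
for concepts in papers:
    for a, b in combinations(concepts, 2):
        graph[a].add(b)
        graph[b].add(a)

def bridged_pairs(graph):
    # Pairs of concepts with no direct edge but a shared neighbour:
    # candidate "unlinked ideas" worth investigating.
    pairs = set()
    for node in list(graph):
        for a, b in combinations(sorted(graph[node]), 2):
            if b not in graph[a]:
                pairs.add((a, b))
    return pairs

print(bridged_pairs(graph))
```

A real system would extract concepts from millions of abstracts and weight the edges, but the structure, nodes for ideas, edges for co-occurrence, unexplored two-hop paths as leads, is the same.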

Samuel Wines:

But I think we want to pay the bills in the meantime, yeah.

Ben Field:

And also, you have to know how to solve small problems before you can solve big problems.

Samuel Wines:

I think it's a really effective way of, how do I frame this, learning through doing and iteratively designing things. Then after a while you go: oh, I think here is a problem, or maybe not a problem, a challenge area we're quite fascinated by. Let's find a way to expand on this and meaningfully contribute in a positive way.

Justin Beaconsfield:

I mean what?

Justin Beaconsfield:

What you find, I think, a lot of the time is that you have two key categories of people that both have fatal flaws.

Justin Beaconsfield:

You either have really, really technical people who try to build a product, but they're not fantastic at it because they don't really understand what non-technical people do, and they fail there.

Justin Beaconsfield:

Or you have non-technical people who try to build technical products, but they just don't know where to begin. And Ben and I thought: all right, we can try not to fall into either of those camps. We're technical people, but if we spend a long period of time inside organisations actually understanding what a non-technical small business does day to day, what actual issues these businesses face, what actual needs people have... There are probably not a heap of technical people deeply exploring those things. So it seems like a really cool opportunity, as an exploration phase, for us to go into organisations, see what their problems are, actually understand them, and then start building tools and programs that address those needs.

Samuel Wines:

I have time for that, obviously.

Andrew Gray:

I really like what you said about, as far as learning and education goes, especially managing your own education, setting those larger goals. That's really cool. "I'd like to get there one day. I don't know how to get there, but as long as it's on the radar, on the map of things I'd like to do, everything I learn along the way opens up new pathways towards that outcome."

Ben Field:

So, I mean, at uni I learned a lot, but we didn't actually make that much stuff, despite it being a Master's of Engineering. And I used to make music when I was younger, and I ended up doing pretty decently with some of the songs I made. But in hindsight, I reckon if I'd just made a hundred times more songs, I would have done a lot better. Quantity isn't in conflict with quality; it directly correlates with quality, at least when you're at the learning stage.

Ben Field:

And I think just building lots of stuff is the best way. I learned a lot more from the solar panel project that we did than from anything I did at uni. Just building stuff is such an effective, and satisfying, way to get good at something. And given that no one knows what this field is becoming, you kind of just have to go on Twitter, hack around on a GitHub link you found, and see if you can reverse engineer something some Russian guy did. There are no good university courses for this yet.

Andrew Gray:

I was checking those links. Yeah, it seems very sense, probe, respond.

Samuel Wines:

You're constantly ensuring that you've got streams of data or insights coming from multiple places, and then you look at integrating that and finding ways to apply it.

Justin Beaconsfield:

And, like Ben said, the field moves so quickly and is changing so much that we also have this awareness that if we over-commit to something, it just leaves you so prone to making something that's not relevant in 12 months. Like all these startups: a lot of them made a bit of money in the short term or whatever.

Justin Beaconsfield:

But, you know, OpenAI updates ChatGPT and their whole startup just tanks, because that's now just a feature of ChatGPT.

Justin Beaconsfield:

Your whole startup was just using ChatGPT with a different... that's done, it's now just a plugin, or ChatGPT now just does that. So if you go too hard on trying to come up with an idea right away, without letting the field play out a bit and really getting into the weeds of understanding what these things won't be able to do, what you can add to them...

Justin Beaconsfield:

...that isn't just a feature on ChatGPT. That's, I think, a really necessary thing to do if you want things to last and you believe in building a better future.

Ben Field:

I also think, if you over-subscribe to an idea, or over-invest in an idea early on... I mean, we've spoken a lot about this knowledge graph thing, but that's because we spoke about it this morning and we're quite excited about it. We wouldn't have the bandwidth to work on it if we weren't churning through projects quite quickly. Otherwise it's: no, we're just doing the startup we've been working on for a whole year, and we're not going to give up on it just because new ideas come along. If you can complete ideas quickly, put them out in the world, see if you get a good response, and iterate, it's a way better way of doing it, especially in something that moves this quickly, than a year in stealth mode building something whose initial hypothesis was wrong, or sacrificing the actually way more important idea because you put all your chips on the table too early.

Justin Beaconsfield:

A kind of funny parallel, it's not the same, but: I think the big consulting firms invested all this money after GPT-3 came out to create their own bespoke, custom language models built off GPT-3. Hundreds of millions of dollars. And by the time they were finished, GPT-4 was out, and it was just better than the thing they'd built, even for their own custom use.

Justin Beaconsfield:

GPT-4 was just better, and all that work kind of went down the toilet. And that's the way nature works as well, right?

Ben Field:

Nature places a lot of small bets and sees which ones hit. Yeah.

Samuel Wines:

Yeah, redundancy sometimes breeds resilience in a sense. It's like and then you figure out which ones work, which ones don't.

Justin Beaconsfield:

Yeah, yeah, yeah, I like that redundancy yeah.

Samuel Wines:

Anything else you guys want to talk about? I don't know.

Ben Field:

I think I'm good. Anything you want to talk about? It doesn't have to be AI. You've got lots of good ideas cooking. What's excited you recently?

Samuel Wines:

Well, yeah, we'll give you a bit of a recap. We're in the middle of trying to figure out what a systems-informed venture studio would look like. We're also exploring what it could look like to do, I don't want to use the word consulting, but maybe prototyping with large organisations, large organisations that actually want to make a meaningful, positive impact: how could we help you transition towards a more viable future? But I think what really excites us at the moment is the venture studio concept. It's a really cool idea.

Samuel Wines:

Well, yeah, and the Living Systems Institute stuff I've talked about as well: what would it look like to have spaces and places to teach complexity-informed, living-systems thinking and ecological design?

Ben Field:

Yeah, and teach through doing as well, teach literally. It's a praxis.

Samuel Wines:

Yeah, yeah. What about for you, Andrew? I feel like the thing that'd be most exciting for you would be BioQuisitive, getting that back up and running.

Andrew Gray:

Yeah, the community lab. I mean, that's where it all began.

Samuel Wines:

Yeah, for me that's.

Andrew Gray:

That's where all this took off from. And it's kind of hard to move forward without acknowledging that it needs to be... it's essential, yeah, it is. It's like I've got my shoelace stuck in the door behind me: I'm trying to walk forward, but I have to address that. It's exciting, setting that up again.

Andrew Gray:

I think with the impact neighbourhood over in Brunswick, an old school being renovated, potentially 70 squares. Initially the community lab was in a shipping container in a warehouse next to a brewery, which is very Brunswick. But this next time, we've learned so much along the way, we've got more resources now, and we've got a huge network of really talented people around us, so there are so many opportunities we didn't have when we set up initially. This 2.0 version is going to be really exciting, not just for education but also for innovation, and for giving people a really safe place to fail early on. And if people don't want to go down the path of innovation, if they just want to create for the sake of creating...

Samuel Wines:

That's also, you know, what it's about.

Andrew Gray:

It's not necessarily a for-profit motive. No, it's not like we have to commercialise everything. It's primarily curiosity-driven, education-driven.

Samuel Wines:

Which is why we're both doing this course from ETH, Designing Resilient Regenerative Systems. A big thing they say is that it all comes from education, obviously, and teaching people how to think differently. But a lot of it is that primary research of just exploration and play and seeing what happens.

Samuel Wines:

Yeah, you never know what's going to happen. And then suddenly, maybe 10 or 15 years later, you have the tech to apply that at scale. We shouldn't necessarily be focusing on everything being commercial, and we shouldn't focus on everything being research; have space for both. That's why, from our point of view, it's really important to take that perspective and provide places and contexts for people to experiment, and also to bring those ideas to reality in a commercial sense. And then, hopefully, the education side as well is another thing we're really going to try and push more.

Andrew Gray:

Yeah, one of my major observations with the community lab back in the day was just how many people wanted to get experience. They're studying these really technical degrees, but there's not really anywhere to actually apply them in a way where they're designing the experiment from beginning to end. If they fail, they fail, great. Generally, when you go to university, there's no time to do that; you're completing a very small portion.

Samuel Wines:

It's all prepared for you. Yeah: just use this pipette to move this, collect the data, write the report. You don't have ownership over it.

Ben Field:

Yeah, you're making these fake things. I hated labs at uni, because it was like: I have to write this fake report about this fake thing we got told to do.

Samuel Wines:

Frosty hey, god, frostbite.

Ben Field:

Yeah, whereas if they told me: I don't know, grow some rat neurons, put them in an RC car and see if you can make a cyborg... Actually giving people agency to make things that are cool, that's the whole thing about engineering. It's like magic: you have this newfound agency in the world, and you can go and do new things with it. It's so boring if it's just "write this report on how much coffee was extracted from this cylinder", you know?

Ben Field:

I think you need to really fail at it. You need to really really fail.

Andrew Gray:

I mean, to get your hands dirty... Uni is set up to cater to a huge number of people. Yeah, and there's no way you can take a nuanced approach.

Ben Field:

Yeah, of course, at that scale.

Samuel Wines:

But that's where community organisations come in. I'm also very mindful that universities, I mean, some of them have been around for a very long time, but they're still feudal-age organisations that have been brought up to speed with the latest capitalist-infused way of operating. You're dealing with an institution that hasn't really gone through a major phase shift in how it operates for a long period of time.

Samuel Wines:

The only thing they've done is condense how long it takes. Back in the day, with Plato and Aristotle, all those guys, like with Pythagoras it was: oh, you want to come study with me? For the first year you need to be silent and just listen. You can't say anything.

Samuel Wines:

Imagine trying to do that at a university now. It's not even semesters, it's trimesters: let's cram it into a year and a half, get your money and get you out. You've turned something that's meant to be about lighting the flame of curiosity, rather than the filling of a vessel, into... yeah.

Ben Field:

I think we're at the greatest point in history for teaching yourself things and for finding other people who want to learn stuff. But our institutions are failing, and it's really sad, and there needs to be a replacement, because you need to be able to educate at scale.

Justin Beaconsfield:

But that's why I'm gonna start, like, a fifth-century university — just completely change the model back. Throw it back 1,500 years or something. Give me Plato.

Samuel Wines:

Oh yeah, what is it? It's like Aristotle's Lyceum or Plato's Academy.

Ben Field:

Yeah, though maybe fewer people burned at the stake.

Justin Beaconsfield:

It's very selective, and it's just, you know, five young, bright minds admitted per year, and yeah, they know how to talk.

Samuel Wines:

I like that streak — like, a cone of silence from deep listening and observing. That's the whole premise of being quiet for a year. It's like, yeah, you should probably actually shut up before you think you know anything.

Ben Field:

Yeah, I know — there's the Thiel Fellowship, which seems like an interesting kind of model. He just gives money to smart people.

Samuel Wines:

One litre of your blood each week. You're not a blood boy, you're an intern — a blood-intern scholarship.

Andrew Gray:

That's what we were talking about before. I saw an experiment around that whole apprenticeship-style approach to teaching science, and that's Elliot Roth from DIYbio.org. But I think he's also heading up — he started some sort of algae and cyanobacteria biomanufacturing startup.

Samuel Wines:

What was it called? No...

Andrew Gray:

But I know the experiment he was looking to run was specifically in education: what would it look like if we had that more tailored approach, with fewer people but more mentors? The way I interpreted it, it was really about giving back, where down the line those who went through the program would then become mentors themselves, and to move on and graduate you would have to do that. It's tough because, I mean, with a model from so long ago, it was the aristocrats who would go to universities, and now you just — I don't want to call it the issue.

Justin Beaconsfield:

No, but you just have this set of new conditions where lots of people are pretty well educated and have access to resources, and lots of them are capable and want to go to university, as opposed to a very small sliver of society.

Ben Field:

Should everyone be going to university? Yeah, I think that model is actually a much better model in general for a few select disciplines, like pure mathematics.

Justin Beaconsfield:

I think most people should be getting higher education. Yeah, maybe the model needs to change, but the resource allocation was just maybe a bit of a simpler task to solve way back when, when you didn't have massive portions of society all wanting to go and get a degree. That just wasn't a thing. I don't know what the rate of people getting tertiary education even was back then.

Samuel Wines:

And also the unemployment off the back of that. Going to university for the degree that was guaranteed to get you a job 30 years ago — that's no longer the case; sometimes it's a detriment. So in what ways can that way of operating make sense? If you want to pursue a path of research, sure, that's great. But then what about the tech schools as a concept? How do we bring that sort of hands-on approach back? Because we crushed that, we destroyed them. They were everywhere and we pretty much got rid of most of them. It's only just coming through and having a revival now. What if we did the tech school or apprenticeship approach for all of these more hands-on disciplines?

Justin Beaconsfield:

Yeah, it is weird that we've created a general model of education, like secondary education, but for most professions you should probably be doing something vastly different to the standard university model. Like, if you want to teach kids how to become programmers fresh out of high school — I mean, even if it's not fresh out of high school — just let them muck around for a few years. I think it's quite good for a lot of people to just do that. But I just don't know that university is... I mean, it's not a bad environment.

Samuel Wines:

Oh, they're one of the best environments you can find to learn how to do it, but exactly — it's the best of what exists.

Ben Field:

But I could totally imagine a way better environment to learn how to code — maybe like a Clockwork Orange type thing: put me in a chair, strap my eyes open, and install an LLM.

Justin Beaconsfield:

Yeah, and even as an alternative to Clockwork — as good as Clockwork is — I'd rather maybe just something really practical, like: all right, your whole semester is just build something, and these are the vague specs, but beyond that it's up to you.

Andrew Gray:

Yeah, until you can. I mean, again, going back to this mass education model — they need a way to score, and that's the challenge. And couple that with the fact that, as Sam said, the narrative is that you go to uni, you increase your job prospects and then you go and get a job, whereas now, like...

Andrew Gray:

What you're pointing to is experience. When you're actually creating something, you're not learning about it — you're doing, and you're learning through doing, but primarily you're getting experience. You're building a portfolio of work, a body of work that you can then go and show prospective hirers: this is what I'm capable of. And so how do we bring that into it? Because now it's the chicken-and-egg problem everybody always talks about: I need a job to get experience, but I need experience to get the job. And the tricky thing is as well...

Justin Beaconsfield:

That's right — right now, the way we get experience is through a job. But when you go work at some big company, a huge portion of the experience you wind up getting is not really relevant to the thing you want to make your core competency. Like, if you really want to learn how to become a programmer — I mean, maybe programming is actually not the best example. I don't know how much of a... you know, at a big corporation.

Ben Field:

I mean, neither of us have actually worked as programmers.

Justin Beaconsfield:

Like less.

Andrew Gray:

I would argue it's less about that — less about what proficiency you're trying to demonstrate — and more about the things that aren't communicated. You know, you have the ability to work for a company: you're showing up on time, you're doing these things. Whereas if somebody off the street were to apply, it's, I don't know — are they gonna show up?

Samuel Wines:

You know, are they gonna?

Andrew Gray:

Yeah, are they gonna have a shower before they get here?

Justin Beaconsfield:

Yeah, so it's — I guess it would just be cool to have environments where we go to an educational institution to get experience. But the experience we're getting — we don't just have to get a job; we can get really, really tailored experience, exactly the thing we want to learn.

Andrew Gray:

It's still about the experience of getting there. I think they're actually trying that with the micro-credentialing thing — build your own degree. It's no longer gonna be a degree like a Bachelor of Science.

Samuel Wines:

It'll be very nuanced: what do you actually want and need for the direction you're trying to go? Yeah, that will make way, way more sense when it happens. I'm gonna have to leave soon.

Ben Field:

Yeah, we're just about to jump into another meeting. Time has flown.

Samuel Wines:

Mate, it always does — it's like a little time dilation bubble. Crazy.

Justin Beaconsfield:

I cannot believe it's two.

Samuel Wines:

Yeah, we've got some work to do — this meeting is about the venture studio. Cool. Thanks so much. Is there anything you want to wrap up with by letting people know where to find you, apart from at CoLabs HQ?

Justin Beaconsfield:

We just built our website. And then, if not, also just on LinkedIn. Yeah, that's it. Glorious Raava — R, double-A, V, A.

Samuel Wines:

All right, I guess we'll just tap out on this one.

Justin Beaconsfield:

Yeah, thanks so much. Thank you.

Samuel Wines:

Thank you for sticking around for this episode of The Strange Attractor with Ben and Justin from Raava. As you can tell, they're working on some really interesting stuff that we find quite fascinating: the notion of using AI to advance innovation and research that is impact-oriented, trying to keep humanity within the planetary boundaries while raising social foundations for as many beings on this planet as possible. That's pretty exciting stuff. So if that's something that interests you, or if you're curious about some of the ideas we've spoken about here, please reach out. We are a community-driven, real-world laboratory, experimenting and exploring pathways towards a more resilient and regenerative future. Please drop us a line and join our community. We'd love to hear from you, and we'd love to find ways to collaborate and coordinate towards that more viable future. Thank you.