AI Hack Lab Highlights: Understanding The Risks And Rewards Of Advanced Architectures

by Marios Roussos

Insights from AIfoundry.org Founders Tanya Dadasheva and Roman Shaposhnik

At the recent AI Hack Lab event, The Future had the opportunity to speak with Tanya Dadasheva and Roman Shaposhnik, the two founders of AIfoundry.org, who shared their insights on the evolving intersection of AI, startups, and investment. The AI Hack Lab, a melting pot of innovation and forward thinking, provided the perfect backdrop for a deep dive into the challenges and opportunities that define today's AI landscape.

With them, we explore the intricate dance between startups and enterprises, the critical role of investors, and the profound implications of AI technologies like transformers. Their candid discussion sheds light not only on the technical and business aspects of AI but also on the broader societal impacts and ethical considerations.

How do you see the relationship between AI, startups, and investors?

Tanya: I worked for a venture capital firm, so I was on the investor side for eight years. Way too long; we used to joke that it should count in dog years, which makes it 56. For enterprises, it is very different from startups. For a startup, the most important thing is to iterate quickly and fail fast. The faster you can test your hypothesis, pivot, and find your way to the market, the better. This is exactly what investors are looking for: startups take the risk and run fast.

For enterprises it is very different. You need to take everything into account. The data that went into the model will carry biases and oddities, and because of the non-deterministic nature of the model, it can produce results that lead to a dangerous situation, an offensive situation, or something like that. So for enterprises, many more things have to be taken into consideration. And it's not only that; it's also a question of privacy. For example, do you send your data to a closed company like OpenAI? For many of them, the answer is no.

How do transformers factor into that?

Tanya: With the transformer architecture, it has been shown that you can make the model tell you the data it was trained on. So it is always a risk: the architecture itself has a kind of memory structure built into it. I don't know why, but it does. And that means that if your personal data went into the training and somebody else uses the model, they can get the model to give away your personal data. That is also a big liability risk.
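As a rough illustration of the extraction risk Tanya describes, a minimal sketch of a memorization probe might look like the following. It assumes a locally hosted causal language model loaded through the open-source Hugging Face transformers library; the model identifier and the probe strings are placeholders, not details from the interview.

    # Hypothetical memorization probe: feed the model the start of a string
    # that may have been in its training data and check whether greedy
    # decoding reproduces the rest verbatim.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "some-local/causal-lm"  # placeholder model identifier
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    def completes_verbatim(prefix: str, suspected_suffix: str) -> bool:
        """Return True if the model continues the prefix with the suspected training text."""
        inputs = tokenizer(prefix, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            max_new_tokens=len(tokenizer(suspected_suffix)["input_ids"]),
            do_sample=False,  # greedy decoding tends to surface memorized continuations
        )
        continuation = tokenizer.decode(
            outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        return suspected_suffix.strip() in continuation

    # Example probe with an entirely made-up record:
    print(completes_verbatim("Contact John Doe at john.doe@", "example.com, phone 555-0100"))

Published extraction attacks are more elaborate than this, but the underlying idea is the same: under the right prompting, the model can reproduce fragments of its training data verbatim.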

So for enterprises, there are several risk factors: what data went into the model, what data gets sent to whom outside the perimeter of the organization, and where everything is hosted. When you train the model, when you run the models, is it on your infrastructure or on somebody else's?

And it's not only OpenAI you should care about; it's all the other companies in between that you use to produce these models. With OpenAI it's very clear: it's just one company, and that's a risk you can assess. But if you use LLaMA, for example, yes, you can download the weights and run it locally, even though you don't know what went into it, fine. But then you need to build the stack, and the stack is also a bunch of other companies that might be getting your data.

This is why, when we say open source, we mean at every step: all the infrastructure you are running, so you can run it locally inside the perimeter of the organization; all the tools, so you understand what each tool is doing and how it is changing the model; and the model itself, so you know everything about it, what data went in and what it cost to train all of this.
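To make that concrete, here is a minimal sketch of what "inside the perimeter" can look like in practice, assuming open weights downloaded once in GGUF format and the open-source llama-cpp-python runtime; the file path and prompt are placeholders, not a setup discussed in the interview.

    # Hypothetical fully local setup: open weights plus an open-source runtime,
    # so no prompt or document ever leaves the organization.
    from llama_cpp import Llama

    llm = Llama(
        model_path="/srv/models/llama-like-model.gguf",  # local weights, placeholder path
        n_ctx=2048,  # context window kept modest for commodity hardware
    )

    # The prompt, which may contain sensitive internal data, is processed locally;
    # nothing is sent to an external API.
    result = llm(
        "Summarise the following internal memo:\n<memo text here>\n\nSummary:",
        max_tokens=128,
        temperature=0.2,
    )
    print(result["choices"][0]["text"])

Because both the weights and the runtime live on the organization's own machines, no data has to cross the network boundary, which is exactly the perimeter argument Tanya makes.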

So for enterprises, it's much riskier to do this. But of course, enterprises are always the largest customers; they have big budgets and lots of use cases. I think it makes sense that most of them use OpenAI for a proof of concept, and then, once they prove that the business use case makes sense, they can bring this infrastructure inside.

Roman: For startups it's completely different. Most startups are just fine using LLaMA. And not only that: most startups die, like 90% of startups don't make it. So nobody cares; they exist to prove a point, and once the point gets proven they figure out the rest. And they are usually using dummy data anyway.

Tanya: And the investor side is really difficult. Investors know how to invest in software. With models, it's more like some instrument being used: as an investor I can invest in the tooling, because I can judge whether everybody who creates models will use it or not. That's a question you can answer, but that's about it.

Models are not like closed software: you can look at the model, you can look at the weights, but it won't help you; it's just a bag of bits. So I think venture capitalists made some bad investments in model companies, mostly out of FOMO; they were so scared of missing out after OpenAI that they made those bets. But I think only a couple of those will turn into profitable companies the way OpenAI did, and OpenAI is not a model company any more, it's a product company. So most of them will either go in that direction and turn into product companies, or at some point they will have to join independent research institutions. And for a venture capitalist, that's a lost battle.

Mind the gap

Do you think that in the past few years this gap between investors and startups has been closing? I mean, do they now understand the software, the product, the models, the technology behind it all?

Tanya: Most of the startups these days are not that knowledgeable either. Even if you look at the landscape in the United States, which is probably the most innovative one, there are a lot of startups on the application side that are essentially building wrappers. Even if you look at LangChain or LlamaIndex, both of those are frameworks for managing the model, but all of them treat the model as a black box. They build tooling and frameworks on top of it to make it more deterministic than it is right now, and there is a lot of investment going into that layer.
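As a toy illustration of what such wrapper layers typically do, the sketch below validates and retries a black-box completion call until it returns well-formed JSON. Here, call_model is a hypothetical stand-in for any opaque model API, not an actual LangChain or LlamaIndex interface.

    # Hypothetical "wrapper" pattern: treat the model as an opaque
    # text-in/text-out function and bolt validation and retries on top
    # to make its output more predictable.
    import json
    from typing import Callable

    def ask_for_json(call_model: Callable[[str], str], question: str, retries: int = 3) -> dict:
        """Re-prompt the black-box model until it returns parseable JSON."""
        prompt = f"{question}\nAnswer strictly as a JSON object."
        last_error = None
        for _ in range(retries):
            raw = call_model(prompt)
            try:
                return json.loads(raw)  # accept only well-formed output
            except json.JSONDecodeError as err:
                last_error = err
                prompt = f"{question}\nYour previous answer was not valid JSON. Return only JSON."
        raise ValueError(f"Model never produced valid JSON: {last_error}")

The value of this kind of layer rests entirely on the model staying a black box, which is exactly the assumption Tanya questions next.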

But actually, I think most of those layers will disappear, because they are solving a problem that exists right now, which is that the models give bad results, and only under the condition that you cannot do anything with the model itself. If it's a black box, they make sense. Once it is not a black box any more, they don't.

So I think this is the interesting part. But again, the venture capitalists are fine with that, because they do understand this kind of wave while it is happening.

The interesting part is that at some point the industry was just rapidly expanding. Now I think it is in a consolidation phase, where some of it will die and some of it will stick around for longer. Of course, venture capitalists would like to educate themselves on all of this, but it is happening so fast that you don't even know which questions to ask.

Roman: Even when, let's say, Google has a venture arm, those people have no clue about the transformers we talked about, even though transformers came from Google. The venture people just do not know that.

The need for a community

Do you think that in the future there will be one community working on AI?

Tanya: I don't think there is one community you can just join and learn from, at least not yet, and that is a problem. We are creating one, but I don't think there will be just one; I think there will be a few. Besides our community, which is pretty good, the material AI foundation is surprisingly good in terms of community; they also host a bunch of projects and try to do educational work, so that's an interesting community. There have been a bunch of others that tried to emerge. On the enterprise side, there is the AI Alliance, which is an alliance of big corporations; they also try to do some educational work, publish papers, and so on.

That's actually why we bootstrapped our community. We started mostly with individual contributors, and at this point people from different open-model projects are joining us because they don't have a community of their own; they are research institutions building models.

What about Cyprus? Do we have a community here?

Tanya: I am sure there are a lot of people doing AI here that I have no clue about, let me put it that way. I was hoping to meet people at this event, and I'm happy to join other events happening in the machine learning space, but there are not too many of them.
