Algebras AI Began With A Fridge Quote No One Could Translate

by Annetta Benzar
August 1, 2025

“Every language is a world,” wrote literary scholar George Steiner. It’s a world that speaks not only through words, but through tone, timing, context, and the weight of what remains between utterances. For Aira Mongush, that world began in southern Siberia, in the Tuvan language of her childhood, a language spoken by hundreds of thousands, but mostly invisible online.

One day, she wrote a quote in Tuvan and stuck it to her fridge. Her husband couldn’t read it, nor could any online dictionary translate it in a way that captured what she truly meant. Despite her years working in AI, no tool could translate her own language. And it wasn’t just a personal moment; it revealed something much larger: entire worlds of language were being left out of the digital age.

Together with a small team of linguists and researchers, Mongush built the first AI model for Tuvan-Russian translation. From there, the project grew. Communities speaking other underrepresented languages began reaching out. What started with one language became Algebras AI, a platform co-founded by Aira Mongush (CEO), Diana Safina (CBDO), and Dima Pukhov (CTO), designed to translate not just vocabulary, but cultural logic, social nuance, and emotional intent.

In this exclusive interview with The Future Media, Aira Mongush, founder and CEO of Algebras AI, discusses how the company is creating language technology that respects the complexity of underrepresented languages in a world still built for English.

1. When someone asks what Algebras AI is, where do you begin?

I usually start with what’s missing: the fact that AI models are overwhelmingly trained on English-centric data. And this isn’t just about machine translation; it’s true for every form of language generation, from summarization to customer support. The global AI field still lacks consistent fluency in most non-English languages. Even within English, different genres require additional fine-tuning, massive datasets, or niche model structures. Outside English, the problem becomes structural.

Then I explain that global companies already include localization in their business models. They do it not as a favor, but because it works. Post-localization, revenue increases by at least 30%, and in some successful cases by as much as 700%. So, the economic value of multilingual fluency is already clear.

When people ask about our name, I explain that artificial intelligence, especially in its statistical and generative forms, is deeply rooted in linear algebra and matrix operations. But more than that, in abstract mathematics, algebras are elegant systems that preserve structure even as you change form. That’s exactly what we are trying to do with language: preserve logic, emotion, and intent across linguistic and cultural boundaries.

2. Algebras AI is built around non-English and underrepresented languages. Why was it important to start there, rather than with more globally dominant languages?

The idea of “underrepresented languages” is often misunderstood. People tend to assume we’re talking about tiny user bases or heritage preservation. But that’s not the case. According to UNESCO and the United Nations, there are six globally dominant international languages: English, Spanish, German, French, Russian, and Chinese. Even if we remove English, we’re still left with five languages that are spoken by hundreds of millions of speakers each. 

Beyond those six, there are around 490 national-level languages used in daily life by tens of millions of speakers. After that come hundreds more, and eventually several thousand ethnic and Indigenous languages, many of which are either endangered or critically under-resourced. So, this isn’t a small market, but it is one that has been massively overlooked.

We chose to start with non-English dominant languages because the lack of digital infrastructure for non-English content has real consequences. Language isn’t just a medium for content; it is a cultural bridge. It helps prevent misunderstanding, conflict, and erasure. A well-built translation model does more than convert meaning; it preserves social cues, emotional tone, and context that are often lost when localization is treated as a final step instead of a structural layer.

That’s why we often include notes, contextual layers, and guidance beyond simple one-to-one translation, because conveying the “why” behind a phrase often matters more than just getting the “what” right. So no, we’re not ignoring global languages. We’re working to rebalance the system so that non-English languages get their share of serious, technical, and thoughtful AI attention.

3. How does Algebras AI account for the local nuance in non-English languages, like dialect shifts or culturally embedded references?

To answer that, we need to begin with something fundamental: every language is shaped by its own internal logic, a way of structuring experience, relationships, and even emotion. That logic is encoded not only in words, but also in syntax, rhythm, register, and gesture. As a machine learning specialist, I’ve worked with multilingual datasets and model alignment across many families of languages, and I’ve seen how structure, not just vocabulary, determines meaning.

Some languages map relatively well onto each other structurally, especially those within the same language family. Others don’t. So, if you translate word-for-word, preserving the source word order, you end up breaking the meaning. At Algebras AI, we approach this differently: we preserve the structure of the meaning. That requires attention to what’s encoded emotionally and semantically within language and figuring out how to reconstruct that in a completely different system.

Our engine keeps track of what we call “semantic anchors”: emotionally significant words, syntactic structures tied to cultural patterns, and phrases that carry more than literal meaning. Then we search for the closest structural equivalents in the target language to ensure the output doesn’t just sound correct, but actually feels fluent, genre-appropriate, and emotionally honest.
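The anchor-tracking idea can be illustrated with a minimal sketch. This is not Algebras AI’s actual engine; the lexicon, tags, and `find_anchors` function here are hypothetical, showing only the general pattern of flagging culturally loaded phrases before translation:

```python
def find_anchors(segment, lexicon):
    """Return (phrase, tag, start_index) for each lexicon phrase found
    in the segment, sorted by position. Case-insensitive matching."""
    lowered = segment.lower()
    hits = []
    for phrase, tag in lexicon.items():
        start = lowered.find(phrase.lower())
        if start != -1:
            hits.append((phrase, tag, start))
    return sorted(hits, key=lambda h: h[2])

# Hypothetical anchor lexicon: phrases that carry more than literal meaning.
ANCHOR_LEXICON = {
    "heavy heart": "emotional_idiom",
    "inshallah": "cultural_marker",
}

hits = find_anchors("She left with a heavy heart, inshallah.", ANCHOR_LEXICON)
# Each hit can then be routed to a structural-equivalent search
# in the target language rather than translated literally.
```

In a real pipeline the lexicon would be far larger and likely learned rather than hand-written, but the principle is the same: mark what must survive translation before deciding how to phrase it.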

We’re still early in our development, and we know that dialect handling and regional reference adaptation will always be evolving tasks. That’s why we also build tools that let our clients control for those variables: dialectal variations, cultural references, tone, and register. We don’t believe there’s one perfect model that solves everything. Instead, we design with the understanding that human context and cultural logic must remain part of the loop.

4. From your experience, what aspects of culture remain hardest for AI to capture?

The hardest parts for AI to capture are the ones that live between the words, in context, in silence, and in social dynamics. These models are trained to predict sequences of tokens, but they’re not designed to understand what’s left unsaid, what’s implied, or what changes depending on who’s speaking to whom.

Take sarcasm, for instance. It doesn’t work unless the speaker’s intention contradicts their literal words, and that contradiction depends on shared cultural assumptions, not surface grammar. The same goes for indirect speech, especially in cultures where formality, ambiguity, or relational hierarchy shape how meaning is delivered. In Arabic, Japanese, and many other languages, intent is often carried through tone, form, or omission, not just through vocabulary.

AI also struggles with genre shifts: it can’t always tell when a sentence is playful, solemn, ironic, or layered. Humor, especially when gendered or taboo, often collapses into flat phrasing. And even when a model gets the sentence right, it can miss the deeper tension. For example, when a phrase signals resistance, grief, or social trauma that’s only visible to insiders.

So, the short answer is this: AI still fails to recognize power, irony, and the politics of speech, and until we address that, cultural fluency will remain out of reach.

5. How do you measure success beyond correctness, so that a localized version actually ‘feels’ authentic?

We don’t rely on standard automatic accuracy metrics (BLEU, chrF, COMET, etc.) alone. Those are useful for some benchmarks, but they don’t tell you whether the mapped meaning lands in the target culture. A localization can be grammatically perfect and still alienate the user if the tone feels off, the rhythm sounds mechanical, or the phrasing doesn’t reflect how people actually speak.
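To make the limits of such metrics concrete, here is a simplified character n-gram F-score in the spirit of chrF (this is a teaching sketch, not the official sacrebleu implementation; real chrF adds smoothing and corpus-level aggregation):

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-grams with whitespace removed, as chrF does."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf_score(hypothesis, reference, max_n=6, beta=2.0):
    """Average char n-gram precision/recall combined as an F-beta score."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

A score like this rewards surface overlap with a reference, which is exactly why it cannot tell you whether a translation feels fluent or emotionally honest to a native speaker.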

Instead, we measure fluency, or what I call “felt authenticity.” That means asking native speakers across different age groups, social classes, and regions whether the localized version sounds like something a person from there would actually say. We test not just for intelligibility, but for emotional trust: does it feel like it was written with the right voice, at the right moment, in the right tone?

Sometimes that requires adjusting metaphors, switching idioms, or shortening a phrase that’s too literal. Other times, it means completely rewriting a section to match genre expectations, for example, how a dating app flirts in Brazil versus how it signals safety in Vietnam. We also use these evaluations to train our internal feedback systems and to generate new examples for fine-tuning datasets.

In the end, if someone says, “This doesn’t feel like a translation,” that’s our highest form of validation.

6. Why did you choose to focus on gaming as a starting point for Algebras AI, and what makes it such a compelling space for cultural localization?

The global gaming localization market, if we include voice localization too, is only about $2.5 billion, and it’s actually just one-third of our serviceable obtainable market. But we chose to start there because it’s where AI translation fails the fastest, and where users are quick to notice. 

From a business standpoint, game studios are often the fastest to expand into new regions, and they know localization is a bottleneck. Every new release needs localization. These teams are building for the global market from day one, they budget for localization, and they already understand that culturally fluent products convert better. What they don’t have is the infrastructure to do it fast, well, and at scale.

Games are often translated into dozens of languages and are deeply narrative in structure. That combination creates pressure because if a translation sounds awkward, stiff, or out of place, players will call it out immediately. Games are also emotionally rich; they rely on humor, timing, tone, and subtext to build worlds. So localization isn’t just a technical problem. It’s artistic. It’s cultural. It affects UI, dialogue, lore, pacing, and even font choices. You can’t fake your way through that.

That’s why gaming made sense for us. It gave us everything we needed: real stakes, real urgency, and a market that already understood the problem.

7. What ethics or data rights concerns come with adapting apps for cultures or minority languages?

The central concern is authorship — who creates the data, who defines its structure, and who has the right to decide how it enters machine systems, especially when it comes from communities historically excluded from that decision. 

Low-resource languages are often scraped from public platforms, extracted from volunteer efforts, or treated as unowned simply because they’re online. But these materials are cultural memory and social knowledge. They shouldn’t be treated as open assets.

At Algebras, we do not use any linguistic data without clear attribution and consent. We work directly with native speakers, linguists, and local organizers who understand both the meaning and the cost of representation. Much of our work in minority languages is intentionally open source, developed independently of commercial pipelines, and grounded in long-term public research that we continue to maintain.

We are also collaborating with UNESCO’s AI and Linguistic Diversity Unit, where the focus is on building responsible, community-aligned infrastructure as an urgent global priority.

8. You recently pitched Algebras at the Startup World Cup Cyprus finals. What drew you to the event, and what did you hope to get out of it?

I came to the Startup World Cup in Cyprus because I wanted to test the vision behind Algebras AI outside of the usual AI ecosystem, where I’ve been living for the past few years. I’ve pitched to engineers, founders, and linguists before — but this time, I wanted to see how the story lands with investors, cross-border startups, and regional partners who don’t come from the language or AI worlds at all.

Cyprus felt like the right place for that test, both geographically and symbolically. It sits between regions, cultures, and alphabets. While it’s not a dominant market, it understands how global systems bypass anything not built in English. That made it the perfect context to explain why we’re building source-to-source AI instead of another English-first layer.

I wanted to see who would listen when we talked about infrastructure for non-English language economies. And, honestly, the positive feedback after our pitch session was very encouraging.

9. As you expand into more languages and dialects, how do you prioritize which ones come next?

We don’t choose languages based on personal preference or political popularity. We use a triangulation model that combines three factors: market pull, technical feasibility, and linguistic vulnerability. 

First, we ask whether there’s a client, region, or user base actively requesting that language, because real usage matters more than theoretical coverage. Second, we ask whether we have enough clean data and linguistic structure to make the system work at scale, because we don’t release models we can’t trust. Third, we ask whether the language is under-resourced or at risk, because urgency should guide infrastructure. 
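The triangulation described above can be sketched as a weighted score. The fields, weights, and formula below are illustrative assumptions for clarity, not Algebras AI’s actual prioritization model:

```python
from dataclasses import dataclass

@dataclass
class LanguageCandidate:
    name: str
    market_pull: float    # 0-1: active client, region, or user demand
    feasibility: float    # 0-1: clean data and linguistic structure available
    vulnerability: float  # 0-1: how under-resourced or at-risk the language is

def priority_score(c, weights=(0.4, 0.35, 0.25)):
    """Combine the three factors into one score (weights are hypothetical)."""
    w_pull, w_feas, w_vuln = weights
    return w_pull * c.market_pull + w_feas * c.feasibility + w_vuln * c.vulnerability

def rank_candidates(candidates):
    """Order candidate languages by descending priority score."""
    return sorted(candidates, key=priority_score, reverse=True)
```

In practice such a score would only be a starting point; the feasibility factor in particular acts as a gate, since a model the team cannot trust would not ship regardless of demand.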

10. What role do you want Algebras to play in changing how people interact with global content, especially in spaces where English was once the default?

I want Algebras to make it possible for teams to launch products globally in their own languages, without relying on English, large teams, or expensive custom pipelines.

Our tools should allow a founder in Tashkent or Beirut to build, test, and ship across regions using voice, subtitles, and culturally adapted phrasing that match native expectations without delay or external rework.

We are building the infrastructure that lets global creation start in any language and makes scaling multilingual content as simple as flipping a switch.
