I was trying to explain the startup I was working on, Ailixr, to my dad. He’s a chemist and an old-school engineer, so he wasn’t really getting it. “Well, we’re training AI, but not really from scratch… We fine-tune for post-production optimization so it’s suitable for classroom needs, like the industrial design school where I’m teaching—or for whatever domain we’re targeting.”
He said, “So, you sell glass bottles… you fill them with different liquids… then resell them?”
No, dad. I’m not thrilled with that analogy.
I have a lot of mechatronics stuff around, so I pulled out an Arduino. “Maybe think of us like this thing. It’s an Arduino, a modular board that universities and tinkerers use instead of buying components from factories and assembling their own boards. Arduino abstracts away all the complicated stuff. You can use it to build a robot, a smart lamp—whatever—without having to reinvent the wheel. We want Ailixr to do the same kind of abstraction for AI.”
My dad thought a bit, and said:
“Oh, so you’re like a Chinese medicine shop: people come in with a problem, and you have all these raw herbs on the shelves. Then your pharmacist weighs out the right combination, packages it up, and gives them a custom formula.”
Nice one, dad! Staying in your chemistry lane…
We’re curators of a whole bunch of AI “ingredients” (like language models, image generation, computer vision models, etc.), mixing and matching them for specific needs: design, law, sociology, cultural heritage, and more.
AI Industry is the Biotech Industry with no guardrails
(Yes, there are guardrails for the users, but I’m talking about guardrails to protect the companies.)
Thinking about the “Chinese medicine shop” metaphor took me a layer deeper: AI startups are analogous to biotech startups, but with no regulatory framework to protect them, only to harm them.
The model makers—OpenAI, Google, Meta—are like pharmaceutical giants creating new compounds. They have to pour enormous resources into R&D (i.e., training models) with no guarantee they’ll succeed.
In drug discovery, you might spend years working toward a single compound that treats a certain condition.
In AI, you spend time and money training a model that hopefully exhibits some specific set of capabilities (language understanding, image generation, etc.).
After all that, both pharmaceuticals and AI face a testing phase:
Clinical trials for drugs.
Benchmarks, real-world evaluations for AI.
If results suck, all that work goes down the drain. But if you strike gold—like Novo Nordisk with Ozempic, or OpenAI with ChatGPT—you suddenly define the market. Ozempic became the massive weight-loss phenomenon; ChatGPT became the go-to AI application.
Before ChatGPT, a lot of us (myself included) were doing smaller AI projects, kind of like “mom-and-pop” pharmaceutical shops mixing custom concoctions for local clients. In my own case, I used to provide NLP-based consulting on customer feedback data. Then OpenAI came out with ChatGPT, and I tossed my whole tool belt.
AI Landscape is Even More Cutthroat Than Pharma
Unlike pharma, the AI world doesn’t have the same patent protections or strict regulations. You can’t just open a little lab in your garage and synthesize a new blockbuster drug; biotech is far more regulated and expensive. But in AI?
Open-source development allows rapid collaboration and reverse-engineering.
Anyone can take a cutting-edge model, fine-tune or distill it, and build a competitive variant.
There’s no global framework giving AI developers a 20-year patent or anything like that.
So while pharma can coast on patented drugs, AI is fiercely competitive. Once a great model is out there—especially if it’s open-sourced—rivals can iterate fast.
And now, nations are pouring money into their own AI research, turning this into something bigger than just business competition. It’s political. Everyone wants a seat at the AI table. The market is incredibly fluid.
The Question: “Aren’t You Just a Wrapper?”
I’ve been getting this question about Ailixr often:
“Isn’t that just a fancy way of saying you wrap existing models?”
I love how Perplexity’s CEO handles this question, so I won’t reinvent the wheel. His argument, in short:
Model Commoditization – AI models are becoming increasingly commoditized, making it unnecessary to develop proprietary ones.
Massive Capital Requirement – Competing at the model level requires billions in funding, which is only viable for a few companies willing to lose large sums.
Application Layer Differentiation – The real opportunity lies in optimizing AI for consumer experiences, focusing on customization across verticals.
Cost Reduction Trends – The cost of AI APIs is dropping significantly (2x reduction every four months), and open-source models are pressuring proprietary alternatives.
Historical Business Patterns – Many successful businesses are "wrappers" that add value through packaging and distribution rather than core technology (e.g., Coca-Cola wouldn’t have thrived without refrigeration technology).
The recent DeepSeek shock to the American market is not a win for China over the US; it is a win for open source over proprietary models.
That’s a good thing for us. We don’t need to build everything from scratch. We just keep an eye on open-source advances, evaluate them on our clients’ behalf, and integrate them into their solutions in the most suitable way. Our sweet spot is bridging the gap between raw AI breakthroughs and what actual domain-specific users need.
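To make the “compounding pharmacy” idea concrete, here’s a minimal sketch of what mixing interchangeable model “ingredients” into domain-specific formulas could look like in code. Everything here is hypothetical: the backend names, domains, and recipes are placeholders, and the lambdas stand in for real model calls (open-source or API-based).

```python
# Sketch of the "compounding" idea: domain-specific recipes that select
# and chain interchangeable model backends behind one common interface.
# All model names and domains are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, Dict, List

# A "backend" is any callable that turns a prompt into text.
Backend = Callable[[str], str]

# Stand-ins for real models; in practice these would wrap local
# open-source checkpoints or hosted API calls.
BACKENDS: Dict[str, Backend] = {
    "small-llm": lambda prompt: f"[small-llm] {prompt}",
    "vision-captioner": lambda prompt: f"[vision] {prompt}",
    "legal-llm": lambda prompt: f"[legal] {prompt}",
}

@dataclass
class Recipe:
    """A domain-specific 'formula': which backends to apply, in order."""
    domain: str
    steps: List[str]

    def run(self, prompt: str) -> str:
        out = prompt
        for name in self.steps:
            out = BACKENDS[name](out)
        return out

# Custom formulas per domain, like the pharmacist's weighed-out herbs.
RECIPES: Dict[str, Recipe] = {
    "design": Recipe("design", ["vision-captioner", "small-llm"]),
    "law": Recipe("law", ["legal-llm"]),
}

result = RECIPES["law"].run("Summarize this contract clause.")
print(result)  # [legal] Summarize this contract clause.
```

The point of the registry is that when a better open-source “ingredient” ships, you swap one entry in `BACKENDS` and every recipe that uses it improves, without touching client-facing code.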
And because the competition among model makers is so intense—and so unregulated—progress will keep accelerating.
That means more (and better) “ingredients” for us to mix into the custom formula.
Ailixr is Your AI Compounding Pharmacy
We’re not out to be the next OpenAI or the next big pharma giant. We’re the folks who see all these AI “drug compounds” on the market and say, “Let’s figure out how to combine them, adapt them, and deliver them.”
Like what you read? Help me grow this newsletter!