
What's Layer 2 Generative AI?

Updated: Aug 22

Insights from Batiste Roger, CTO of Odonatech.


This is the question I get asked most often when I pitch: What is Layer 2 Generative AI? 🤯



It's true that while banks are cautiously trying out Mistral AI and Llama 3 (and rightly so), at Odonatech, we are already preparing for the next step.

What better time than summer to finally take the time to talk to you about it?



In this post: 


  1. The difficulties you will encounter when adopting generative AI

  2. The solutions we found, which, combined, we call a "Layer 2"

  3. Why working with us may be your best option (and why!)


The difficulties of adopting generative AI, in reality


Banks and fintechs would love to incorporate generative AI into their services, whether internally or customer-facing. The potential is enormous.


The problem is, it's much more complicated than it seems:


🤯 AIs are not reliable (e.g., they might say the Livret A is at 2%)

🤪 AIs are not deterministic (so how do you test them?)

🧠 AIs require prompts... which need continuous improvement

😇 AIs need to hand over to humans at the right moment

🤓 AIs need knowledge (cf. RAG), not so simple to configure

🧩 AIs are in free text (unstructured data), difficult to integrate into the CRM


And we could continue this list for a while...


🔍 Prompt, n.: the instructions given to a generative AI to specify its expected behavior.


In short, Layer 1 AIs (like ChatGPT, Anthropic Claude, Google Gemini, Llama, ...) are like a jackhammer: useful but a bit tricky to use.


To put a real use case into "production," you quickly find yourself assembling a whole team with data scientists (fine-tuning, RAG, ...), botmasters (prompts, user feedback), interface developers (conversation tracking, tagging), ... Expensive, time-consuming, and potentially ephemeral 😥


🔍 Fine-tuning, n.: adjusting the training of an existing AI, generally to specialize it, by showing it new data.


🔍 RAG, n.: an architecture that lets a generative AI benefit from the knowledge in a database. The principle is to search the database (based on the user's message), then include the search result in the AI's prompt, allowing it to "know" the relevant content of the database.
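The RAG principle above fits in a few lines of Python. Everything here is a toy sketch: the "retriever" is a plain keyword overlap (real systems use vector embeddings), and the resulting prompt would be sent to whatever Layer 1 model you use.

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever: rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Include the retrieved passages in the prompt so the model can 'know' them."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Livret A rate is set by the French government.",
    "A PEE is a company savings plan for employees.",
]
prompt = build_rag_prompt("What is the Livret A rate?", docs)
```

The hard part in production is not this loop, but the retrieval quality: a search that returns the wrong passage makes the AI confidently wrong.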


😭😭😭 Well, if that were all, it would be okay.


If you do (like us) financial consulting, you have calculations to make (discounting, interest rates, cost of credit, ...). Would you want ChatGPT to do them, at the risk of making mistakes? Of course not. So you'll use your existing simulators. But how do you integrate them into the same chat (in natural language), which combines ChatGPT and your simulators?
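The "chat plus deterministic simulators" combination can be sketched as a simple router. This is a minimal, assumed illustration: `call_llm` is a placeholder for a Layer 1 call, intent detection is a keyword check (a real system would use an LLM with function calling to extract the parameters), and the only simulator shown is the standard annuity formula for a fixed-rate loan.

```python
def monthly_payment(principal, annual_rate, years):
    """Deterministic simulator: standard annuity formula, not an LLM guess."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def call_llm(message):
    """Placeholder for a Layer 1 model call (hypothetical)."""
    return "LLM answer (placeholder)"

def answer(user_message):
    """Route calculation questions to the simulator; everything else to the LLM."""
    if "monthly payment" in user_message.lower():
        # In a real system, an LLM with function calling would extract these
        # parameters from the conversation; here they are hard-coded.
        result = monthly_payment(200_000, 0.04, 20)
        return f"Your monthly payment would be about {result:.2f} €."
    return call_llm(user_message)
```

The key design choice: the number the client sees always comes from deterministic code, never from the model's arithmetic.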


Pretty quickly, you find yourself writing a lot of code and combining classic code, classic AI, and generative AI. If you find tricks, you can have the best of all three worlds.


And it's not over. Then comes the regulatory question. You find yourself building monitoring tools and protections to ensure the AI doesn't say anything forbidden. Still not over. Then comes the question of cybersecurity. What if someone tries to steal your prompts or derail your AI (a reputational risk)? You're off to code an additional layer of defense (no, the ones provided by the Layer 1s are not enough).
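The simplest form of such a protection is an output filter that sits between the model and the user. This is only a sketch under assumptions: the forbidden phrases below are invented for illustration, and real compliance layers combine pattern rules with classifiers and human review.

```python
# Illustrative forbidden phrases only; a real compliance list is far richer.
FORBIDDEN_PATTERNS = ["guaranteed return", "insider information"]

def check_output(reply):
    """Block replies containing forbidden phrases before they reach the user."""
    lowered = reply.lower()
    for pattern in FORBIDDEN_PATTERNS:
        if pattern in lowered:
            return "I can't answer that. Let me put you in touch with an advisor."
    return reply
```

The same idea applies on the way in, to catch prompt-injection attempts before they reach the model.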


Okay, you succeeded? What do your users say about their new interface? I imagine they identified flaws, but also new ideas. You must therefore update your prompts. And since Claude has become more efficient than ChatGPT, or vice versa, you change the Layer 1 AI. But changing the Layer 1 AI requires updating the prompts, as they don't react exactly the same way. After making all these changes, how do you know if your assistant is better or worse than before?


Suppose it goes well. Is the bot performant enough to meet the "job to be done," i.e., to do what the user expects? For us, the answer was initially no: the AI did not take emotions into account enough to provide relevant financial advice. You can laugh about it during a test, but telling a client who just inherited that you're happy for them is completely inappropriate! You need to develop a specific skill for managing emotions, optimized for financial topics. How do you do it? Fine-tuning? RAG? AI or not AI? A bit of everything?


There, it's done, now to test! But how? We won't hand-test for hours after every change. Yet, since the AI's output is random, a single test per version is not enough. So you find yourself using OpenAI Evals or Inspect. It's great, but it quickly becomes quite heavy.


🔍 Evals and Inspect are tools for evaluating the performance of a generative AI. They notably have the difficult task of helping us overcome the randomness of AIs and the fact that they express themselves in free text. That makes them harder to build than classic automated-testing software.
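The core trick these tools share can be shown in miniature: because the model is non-deterministic, you run each test case many times and score a pass rate instead of a single pass/fail. The `model` stub below simulates that randomness; it stands in for a real Layer 1 call.

```python
import random

def model(question):
    """Stub for a non-deterministic Layer 1 model: answers vary between calls."""
    return random.choice(["The Livret A rate is 3%.", "I'm not sure, sorry."])

def eval_case(question, must_contain, runs=20):
    """Run the same test case many times and return the fraction that pass."""
    passes = sum(must_contain in model(question) for _ in range(runs))
    return passes / runs

pass_rate = eval_case("What is the Livret A rate?", "3%")
```

Comparing pass rates across versions is how you answer the earlier question: is the assistant better or worse than before?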


Our Layer 2 Generative AI solutions, the precious fruit of our clever and sharp team


If part 1 depressed you, this one will make you smile!


Here's what we've created:


  • LiLa Core: the brain of LiLa. A true web of interconnected specialized modules, forming an intelligent AI from Layer 1 AIs (and other deterministic code blocks, of course). LiLa Core extends the performance of the RAG architecture, calls APIs, business calculators, and has emotion management capabilities as well as real financial consulting expertise (structured, regulatory, up-to-date, attentive). And I haven't even told you everything (our IP consultant is on vacation, I'm not taking any risks)! In short, LiLa Core is the center of "Layer 2."


  • LiLa Ali: our interfaces for advisors and/or marketing services: lead management, follow-up, marketing automation, ... This is what brings business opportunities to our clients, so naturally, it's the center of our ROI!


  • LiLa Quality: a suite of interfaces for monitoring conversations, message quality, and everything related to compliance and security.


  • LiLa Designer: a suite of interfaces for parameterization, prompt updates, and testing. In short, everything that makes a project take us a few days to do what normally takes a few months 🤩


And all of this is independent of any particular Layer 1. It's an overlay. We can build it on top of any generative AI, or any group of generative AIs (they don't all have the same strengths, so of course we can combine several!).


An overlay that works very well for hundreds of use cases related to financial consulting, banking, insurance, and similar activities (e.g., lead generation for a fintech, helping employees with the PEE, specialized crypto coach, ...).


Of course, each use case involves customizing LiLa... but the same is true for Layer 1s, with far poorer results!


The interest in working with us


We are a small team, yet we do better than OpenAI and Mistral AI? Of course: we are not their competitors, but their clients!


We don't do Big Data (at least, not as Big). We don't have gigantic, expensive data centers, R&D teams specialized in LLM training techniques, or partnerships with Le Monde to use their data. All of that, the Layer 1s do for us.


What we do is essentially two things:


🔭 We know how to create intelligent AIs from limited AIs (the famous Layer 1s)


🙏 We know how to support you in successfully implementing a use case, whether in terms of (numerous) tools or specific skills.


We are financial professionals, which means we know how to create the bridge between the generalist technology of Layer 1s and the specific needs of bankers.


Layer 1s don't have our Layer 2 AI engine (more intelligent, more reliable, more subtle), nor the specialized support in your sector. Consulting firms only have the latter (by the way, we are open to collaborations). In short, if you want to succeed in a generative AI project in finance, I think you're reading the right LinkedIn post.



Conclusion


Layer 2 AIs are more intelligent and more specialized than Layer 1 AIs.


In the future, finance companies will not work directly with OpenAI, Kyutai, or Mistral AI, but with actors like Odonatech.

The same will be true in every industry, which will see the birth of its own Layer 2 actors. This strategic observation, which extends the value chain of generative AI, naturally follows from our practical experience. There is a real business and technical complexity related to generative AIs, as well as, of course, to financial consulting. Experts capable of linking these two worlds are needed.


I hope you enjoyed this post and that it helped you discover or clarify the concept of Layer 2 🤓. I also hope it will make you want to learn more about actors like Odonatech. 👭


PS: For investors looking to diversify their portfolio, consider both Layer 1 and Layer 2. No one knows exactly how the value will be distributed between these two types of actors, but my gut tells me that Layer 2 is very underestimated. Stay ahead of the curve, form your own opinion!



