Saturday, September 13, 2025

What Does Adding AI To Your Product Even Mean?

Introduction

I have been asked this question multiple times: "My management sent out a directive to all teams to add AI to the product, but I have no idea what that means."


In this blog I discuss what adding AI actually entails, moving beyond the hype to practical applications and some things you might try.

At its core, adding AI to a product means using an AI model, either the currently popular large language model (LLM) or a traditional ML model, to either

  • predict answers
  • generate new data - text, image, audio etc

The effect is that it enables the product to

  • do a better job of responding to queries
  • automate repetitive tasks
  • personalize responses
  • extract insights
  • reduce manual labor

It's about making your product smarter, more efficient, and more valuable by giving it capabilities it didn't have before.

In any domain where there is a huge body of published knowledge (programming, healthcare) or vast quantities of data (e-commerce, financial services, health, manufacturing etc), too large for the human brain to comprehend, AI has a place and will outperform what we currently do.


So how do you go about adding AI?

Thanks to social media, AI has developed the aura of being super-complicated. But in reality, if you use off-the-shelf models, it is not that hard. Training models is hard, but 97% of us will never have to do it. Below is a simple 5-step approach to adding AI to your system.

1. Requirements

It is really important that you nail down the requirement before proceeding any further. What task is being automated? What questions are you attempting to answer?

The AI solution will need to be evaluated against this requirement, not once or twice but on a continuous basis.

2. Model

Pick a model.

The recent explosion of interest in AI is largely due to Large Language Models (LLMs) like ChatGPT. At its core, an LLM is a text prediction engine: give it some text and it will give you the text that is likely to follow.

But beyond text generation, LLMs have been trained on a lot of published digital data and they retain associations between pieces of text. On top of that, they are trained with real-world examples of questions and answers. For example, the reason they do such a good job at generating programming code is that they are trained on real source code from GitHub repositories.
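
To make the "text prediction engine" idea concrete, here is a minimal Java sketch. LlmClient and its complete() method are hypothetical placeholders, not a real vendor SDK; every commercial model has its own client library and request format.

    // Hypothetical interface standing in for whatever SDK or HTTP client you use.
    interface LlmClient {
        String complete(String prompt);
    }

    class CompletionDemo {
        public static void main(String[] args) {
            // Stubbed with a canned answer so the sketch runs standalone.
            LlmClient llm = prompt -> "Paris.";
            String prompt = "The capital of France is";
            // The model's only job is to return the text most likely to follow the prompt.
            System.out.println(prompt + " " + llm.complete(prompt));
        }
    }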

What model to use?

The choices are:

  • Commercial LLMs like ChatGPT, Claude, Gemini etc
  • Open source LLMs like Llama, Mistral, DeepSeek etc
  • Traditional ML models

Choosing the right model can make a difference to the results. There might be a model specially tuned for your problem domain.

Cost, latency and accuracy are some parameters that are used to evaluate models.

3. Agent

Develop one or more agents.

An agent is the modern evolution of a service. It is the glue that ties the AI model to the rest of your system.

The agent is the orchestration layer that:
  • Accepts requests either from a UI or another service
  • Makes requests to the model on behalf of your system
  • Makes multiple API calls to other systems to fetch data
  • May search the internet
  • May save state to a database at various times
  • In the end, returns a response or starts a process to finish a task

It is unlikely that you will develop a model. But it is very likely that you will develop one or more agents.
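
To make this concrete, here is a minimal Java sketch of an agent that answers customer questions. LlmClient, CrmApi, Database and SupportAgent are hypothetical names standing in for your model client and internal systems, not a specific framework.

    interface LlmClient { String complete(String prompt); }
    interface CrmApi { String recentTickets(String customerId); }
    interface Database { void save(String customerId, String question, String answer); }

    class SupportAgent {
        private final LlmClient llm;   // wraps calls to the AI model
        private final CrmApi crm;      // an internal system the agent fetches data from
        private final Database db;     // where the agent saves state

        SupportAgent(LlmClient llm, CrmApi crm, Database db) {
            this.llm = llm;
            this.crm = crm;
            this.db = db;
        }

        // Accepts a request (from a UI or another service) and returns a response.
        String handle(String customerId, String question) {
            String history = crm.recentTickets(customerId);   // API call to fetch data
            String prompt = "Customer history:\n" + history
                    + "\n\nQuestion: " + question
                    + "\nAnswer using only the history above.";
            String answer = llm.complete(prompt);              // request to the model
            db.save(customerId, question, answer);             // save state
            return answer;                                     // return the response
        }
    }

A real agent may loop: ask the model which tool or API to call next, call it, feed the result back, and repeat until the task is done. The single pass above is the simplest useful shape.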

4. Data pipeline

Bring your data.

A generic AI model can only do so much. Even without additional training, just adding your data to the prompts can yield better results.

The data pipeline is what makes the data in your databases, logs, ticket systems, GitHub, Jira etc available to the models and agents. A typical pipeline, sketched below, will:

  • get the data from source
  • clean it
  • format it
  • transform it
  • use it in either prompts or to further train the model
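
Here is a minimal Java sketch of such a pipeline feeding support tickets into a prompt. TicketStore and LlmClient are hypothetical placeholders for your own data source and model client.

    import java.util.List;
    import java.util.stream.Collectors;

    interface TicketStore { List<String> fetchRecent(int limit); }   // your data source
    interface LlmClient { String complete(String prompt); }          // your model client

    class TicketSummaryPipeline {
        String run(TicketStore store, LlmClient llm) {
            // get the data from source
            List<String> raw = store.fetchRecent(50);

            // clean, format and transform: drop blanks, trim, one ticket per line
            String formatted = raw.stream()
                    .filter(t -> t != null && !t.isBlank())
                    .map(String::trim)
                    .map(t -> "- " + t)
                    .collect(Collectors.joining("\n"));

            // use it in a prompt (the other option is to use it to further train a model)
            String prompt = "Summarize the main themes in these support tickets:\n" + formatted;
            return llm.complete(prompt);
        }
    }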

5. Monitoring

Monitor, tune, refine.

Lastly, you need to continuously monitor results to ensure quality. LLMs are known to hallucinate and even drift. When the results are not good, you will try tweaking the prompts, the data and the model parameters, among other things.
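
A minimal Java sketch of what "continuously monitor" can look like: run a fixed set of evaluation cases against the model and track the pass rate over time. The keyword check is just a stand-in for whatever quality metric your requirement from step 1 defines; EvalCase and LlmClient are hypothetical names.

    import java.util.List;

    interface LlmClient { String complete(String prompt); }

    // One evaluation case: a prompt and a keyword the answer is expected to contain.
    record EvalCase(String prompt, String expectedKeyword) {}

    class QualityMonitor {
        // Fraction of cases that pass; a drop over time is a signal of drift or a bad change.
        double passRate(LlmClient llm, List<EvalCase> cases) {
            long passed = cases.stream()
                    .filter(c -> llm.complete(c.prompt())
                            .toLowerCase()
                            .contains(c.expectedKeyword().toLowerCase()))
                    .count();
            return cases.isEmpty() ? 0.0 : (double) passed / cases.size();
        }
    }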

Now let us see how these concepts translate into some very simple real-world applications across different industries.


Examples

1. Healthcare: Enhancing Diagnostics and Patient Experience

Adding AI can mean:

  • Personalized Treatment Pathways: An AI Agent can analyze vast amounts of research papers, clinical trial data, and individual patient responses to suggest the most effective treatment plan tailored to a specific patient's profile.

    • Example: For a person with high cholesterol, an AI agent can come up with a personalized diet and exercise plan.


2. Finance: Personalized Investing

Adding AI could mean:

  • Personalized Financial Advice: Here, an AI Agent can serve as an "advisor" offering highly tailored investment portfolios and financial planning advice.

    • Example: A banking app's AI agent uses an LLM to understand your financial goals and then uses its "tools" to connect to your accounts, pull real-time market data, and recommend trades on your behalf. It can then use its LLM to explain in simple terms why it made a specific trade or rebalanced your portfolio.


3. E-commerce: Customer Experience

Adding AI could mean:

  • Personalized shopping: AI models can find the right product at the right price with the right characteristics for a user's requirements

    • Example: Instead of me shopping and comparing for hours, the AI does it for me and recommends a final product to purchase.


In Conclusion

Adding AI to your product to make it better means using the proven power of AI models

  • To better answer customer requests with insights
  • To automate repetitive, time-consuming tasks
  • To make predictions that were previously hard
  • To gain insights into vast bodies of knowledge

The tools are there. But to get results you need discipline, patience and process.

Start small. Focus on one specific business problem you want to solve, and build from there.


Saturday, September 6, 2025

CRDT Tutorial: Conflict-Free Replicated Data Types

Have you ever wondered how Google Docs, Figma and Notion provide real-time collaborative editing?

The challenge is: what happens when two users edit the same part of the document at the same time?

  • User A at position 5: types X
  • User B at position 5: types Y

This is a concurrency problem. A traditional implementation would need to lock the document to handle this, but that would destroy real-time responsiveness. There is a need to automatically resolve conflicts so that everyone ends up with the same document state.

In Google Docs, CRDTs are used to handle concurrent text edits, ensuring that if users insert text at the same position, the system is able to resolve the order without conflicts.

What is a CRDT?

CRDT stands for Conflict-free Replicated Data Type.

A CRDT is a specially designed data structure for distributed systems that:

  • Can be replicated across multiple nodes or regions.

  • Allows each replica to be updated independently and concurrently (without locks or central coordination).

  • Guarantees that all replicas will converge to the same state eventually, without conflicts, even if updates are applied in different orders.

Why do we need CRDTs?

In collaborative editing (like Google Docs, Notion, Figma):

  • Many users may edit the same document concurrently.

  • Network latency or partitions mean updates may arrive in different orders at different servers.

  • We can’t just “last-write-wins” — that would lose user edits.

  • We want low-latency local edits (user sees their change immediately), with eventual consistency across the system.

  • These conditions are typical in distributed systems.

CRDTs give us a way to allow users to edit locally first and let the system reconcile changes without central locks.

Types of CRDTs

There are two broad families:

  1. State-based (Convergent CRDTs, CvRDTs)

    • Each replica occasionally sends its full state to others.

    • Merging = applying a mathematical "join" function (e.g., union, max); a small G-Set sketch after this list shows the idea.

  2. Operation-based (Commutative CRDTs, CmRDTs)

    • Each replica sends only the operations performed (e.g., "insert X at position 2").

    • These operations are designed so that applying them in any order yields the same final result.
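
To make the state-based family concrete, here is a minimal Java sketch of the simplest CRDT of all, a grow-only set (G-Set). The class is illustrative, not production code.

    import java.util.HashSet;
    import java.util.Set;

    class GSet<T> {
        private final Set<T> elements = new HashSet<>();

        // Local update: no locks, no coordination with other replicas.
        void add(T value) { elements.add(value); }

        // State-based merge: the "join" is set union, so merging in any order
        // (and any number of times) produces the same result.
        void merge(GSet<T> other) { elements.addAll(other.elements); }

        Set<T> values() { return Set.copyOf(elements); }
    }

If replica A adds "x" while replica B concurrently adds "y", then after the replicas exchange and merge state, both end up with {"x", "y"}, no matter which merge happens first.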

Examples of CRDTs in Practice

  • G-Counter (Grow-only counter): Each replica increments a local counter, merge = element-wise max.

  • PN-Counter (Positive-Negative counter): Like G-counter, but supports increment & decrement.

  • G-Set (Grow-only set): Only supports adding elements.

  • OR-Set (Observed-Remove set): Supports add & remove without ambiguity.

  • RGA (Replicated Growable Array) or WOOT or LSEQ: For collaborative text editing, where inserts/deletes happen at positions in a string.

These are the basis for how real-time editors like Google Docs or Figma handle concurrent text/graphic editing.

Below is a simplistic Java implementation of a CRDT:

https://github.com/mdkhanga/blog-code/tree/master/general/src/main/java/com/mj/crdt

The linked code provides a simple implementation of a G-Counter whose replicas are merged by taking the maximum value for each node. It is a starting point to understand how CRDTs ensure convergence in distributed systems.
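
For readers who do not want to click through, here is a minimal G-Counter sketch along the same lines. It is an illustration of the idea, not a copy of the repository code.

    import java.util.HashMap;
    import java.util.Map;

    class GCounter {
        private final String replicaId;
        private final Map<String, Long> counts = new HashMap<>();

        GCounter(String replicaId) { this.replicaId = replicaId; }

        // Each replica only ever increments its own slot.
        void increment() { counts.merge(replicaId, 1L, Long::sum); }

        // Merge = element-wise max of the per-replica slots.
        void merge(GCounter other) {
            other.counts.forEach((id, v) -> counts.merge(id, v, Long::max));
        }

        // The counter's value is the sum of all slots.
        long value() { return counts.values().stream().mapToLong(Long::longValue).sum(); }
    }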

CRDT vs. Centralized Coordination

  • If concurrent editing is rare → a simple centralized lock/version check may be enough.

  • If concurrent editing is common (e.g., Figma boards with dozens of people) → you want CRDTs to avoid merge conflicts.

In short:

A CRDT is a mathematically designed data structure that ensures all replicas in a distributed system converge to the same state without conflicts — perfect for real-time collaborative editing.

Note that CRDTs are needed only for collaborative editing at scale in distributed systems. For anything else, they could be overkill.