
AI Terminology (Part 3)

Welcome back to another round of AI terminology. Let's get started:


Inference: This is the process of using a trained AI model to make predictions or decisions on new, unseen data (the "thinking"). It's the model applying what it has learned to the real-world situation or question you sent in your prompt.
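To make the training/inference split concrete, here's a tiny scikit-learn sketch. The data, labels, and "hours studied" scenario are made up purely for illustration; the point is just that training happens once, and inference is what happens every time the trained model sees new input.

```python
from sklearn.linear_model import LogisticRegression

# Training phase: the model learns from labeled examples (hours studied vs. pass/fail).
X_train = [[1], [2], [3], [8], [9], [10]]   # hours studied
y_train = [0, 0, 0, 1, 1, 1]                # 0 = fail, 1 = pass
model = LogisticRegression().fit(X_train, y_train)

# Inference phase: the trained model makes a prediction on new, unseen data.
new_data = [[7]]
print(model.predict(new_data))        # e.g. [1] -> predicted to pass
print(model.predict_proba(new_data))  # the model's confidence in each class
```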


Embeddings: The way AI represents words or concepts as points in a mathematical space (often stored in a vector database). Embeddings turn text (or images) into lists of numbers that capture meaning and relationships. This is how AI understands that "dog" and "puppy" are more closely related than "dog" and "skyscraper".
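Here's a small sketch of that "dog vs. puppy vs. skyscraper" idea. It assumes you have the sentence-transformers package installed; "all-MiniLM-L6-v2" is one commonly used small embedding model (downloaded the first time you run it).

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

words = ["dog", "puppy", "skyscraper"]
vectors = model.encode(words)  # each word becomes a vector (a list of numbers)

# Cosine similarity: higher means "closer in meaning".
print(util.cos_sim(vectors[0], vectors[1]))  # dog vs. puppy      -> relatively high
print(util.cos_sim(vectors[0], vectors[2]))  # dog vs. skyscraper -> much lower
```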


Fine-tuning: Think of this as giving an AI model some extra, specialized training. It's taking an already trained model and then retraining it yourself on your specific use case. This way, you can customize a general-purpose AI to become an expert in your particular field, whether that's legal documents, medical research, or dad jokes. OpenAI does let you fine-tune its models via the API, but it's quite a time-consuming and advanced process for an individual or for small use cases. In those situations, I recommend RAG.
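To give a feel for what fine-tuning data looks like in practice, here's a rough sketch of chat-format training examples written out as a JSONL file. This is approximately the layout OpenAI's fine-tuning API expects, but check the current docs for the exact field names and requirements; the dad-joke examples are obviously just placeholders.

```python
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a dad-joke generator."},
        {"role": "user", "content": "Tell me a joke about coffee."},
        {"role": "assistant", "content": "Why did the coffee file a police report? It got mugged."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a dad-joke generator."},
        {"role": "user", "content": "Tell me a joke about bread."},
        {"role": "assistant", "content": "Why did the bread break up with the toaster? Things got too heated."},
    ]},
]

# Fine-tuning data is typically one JSON object per line (JSONL).
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

You'd usually want hundreds of examples like these, which is part of why fine-tuning is more effort than it first appears.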


RAG (Retrieval-Augmented Generation): Instead of relying solely on what it learned during training (and without needing fine-tuning), RAG lets the LLM look up relevant information from your knowledge base before generating a response. You build a RAG database, load in your relevant data, documents, and other info, and then tell the LLM to consult it before answering questions. This technique helps reduce hallucinations and makes AI responses more accurate and up-to-date. It's really cool and relatively easy to do.
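Here's a minimal retrieve-then-generate sketch of the idea. It again assumes sentence-transformers is installed; the documents are made-up placeholders, and the final LLM call is left to whichever model you use.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Your "knowledge base": in a real setup these would be chunks of your documents.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are Monday to Friday, 9am to 5pm.",
    "Shipping is free on orders over $50.",
]
doc_vectors = model.encode(documents)

question = "Can I return something I bought two weeks ago?"
question_vector = model.encode(question)

# Retrieve: find the document most similar to the question.
scores = util.cos_sim(question_vector, doc_vectors)[0]
best_doc = documents[int(scores.argmax())]

# Augment + generate: hand the retrieved context to the LLM along with the question.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)  # send this prompt to your LLM of choice
```

Real RAG setups add chunking, a proper vector database, and retrieval of several passages at once, but the core loop is exactly this: embed, retrieve, then generate.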


Chain of Thought: This is a clever prompting technique (which reduces hallucinations) that's all about breaking down complex problems. Instead of just asking an AI for an answer, you encourage it to "show its work" – to explain its reasoning step by step. Just like when your math teacher insisted you write out all the steps to solve a problem, not just the final answer. This approach often leads to more accurate results, especially for tricky questions that require multiple steps of reasoning.
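In practice, chain of thought is often as simple as changing how you phrase the prompt. Here's a small sketch; the exact wording is just an example, and you'd send either prompt to whichever LLM you use.

```python
question = "A shirt costs $25 after a 20% discount. What was the original price?"

# Direct prompt: just ask for the answer.
direct_prompt = f"{question} Give only the final answer."

# Chain-of-thought prompt: ask the model to show its reasoning first.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step, showing each calculation, "
    "and then give the final answer."
)
# The chain-of-thought version usually produces the working
# (25 / 0.8 = 31.25) along with the answer, not just a number.
```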


Agents: Agents are all the buzz. In theory, AI agents have agency and autonomy: they can perform tasks on your behalf, make decisions, and take actions to achieve their goals. Until now, we've mostly interacted with AI via chat, where the AI is limited to that conversation: it can give you an answer but can't really do anything beyond that (with some exceptions). However, as AI advances and more agentic frameworks are released, this will become possible! I've been trying to play around with AutoGen (an agentic framework) myself.
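For a taste of what that looks like, here's a minimal sketch based on the classic AutoGen (pyautogen) two-agent pattern. Class names, arguments, and the config format vary between AutoGen versions, so treat this as illustrative rather than copy-paste ready; the API key and task are placeholders.

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# The assistant agent plans and writes code; the user proxy can execute it locally.
assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                            # run without asking a human each turn
    code_execution_config={"work_dir": "agent_workspace"},
)

# The two agents converse back and forth, executing steps, until the task is done.
user_proxy.initiate_chat(
    assistant,
    message="Summarize today's top AI news into a short bullet list.",
)
```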
