
AI Terminology (Part 2)

Ready for round two of our crash course in AI lingo? Last time, we covered the basics. Now, let's dive a bit deeper into some terms that I believe will start connecting some dots for you and also make you sound like you know what you're talking about.


Narrow AI: This is the kind of AI we're mostly dealing with today. Narrow AI is designed for specific tasks, like recommending movies on Netflix or helping Siri understand your request. It's really good at what it does, but it can't color outside the lines... On the AI spectrum, Narrow AI is considered "weak AI".


Tokens: When it comes to language models, tokens are like the building blocks of text. They can be words, parts of words, or even single characters. When you send a prompt to an LLM, it breaks down your input into tokens to process it. Every LLM breaks down and uses tokens differently, and when we access LLMs via their API, we usually pay per token.
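
To make this concrete, here's a quick sketch using tiktoken, OpenAI's open-source tokenizer library. The price figure is a made-up placeholder just to show the pay-per-token idea, not a real rate:

```python
# pip install tiktoken  (OpenAI's open-source tokenizer library)
import tiktoken

# Load the encoding used by many recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Explain transformers to me like I'm five."
token_ids = enc.encode(prompt)

print(f"Token count: {len(token_ids)}")

# Hypothetical price, purely to illustrate pay-per-token billing:
PRICE_PER_1K_TOKENS = 0.001  # placeholder, not a real rate
print(f"Estimated input cost: ${len(token_ids) / 1000 * PRICE_PER_1K_TOKENS:.6f}")
```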


Tokenization: This is the actual process of chopping up text into those tokens. It's kind of like how we break down sentences into words, except AI does it in a way that makes sense to computers.
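
Here's what that chopping actually looks like, again using tiktoken as an example tokenizer (other models split text differently):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization isn't magic."
token_ids = enc.encode(text)

# Decode each ID on its own to see the individual chunks.
pieces = [enc.decode([tid]) for tid in token_ids]
print(pieces)
# Common words often survive whole, while rarer words get split
# into subword pieces; the exact splits depend on the tokenizer.
```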


Transformer Architecture: This is the breakthrough (from Google) that made modern large language models possible; it's also the "T" in ChatGPT (Generative Pre-trained Transformer). It's a type of neural network that's really good at understanding context in sequences (like sentences or paragraphs). The key feature is something called "attention," which allows the model to focus on different parts of the input when producing each part of the output. It's kinda like being able to consider an entire sentence while simultaneously figuring out the meaning of each word in it, instead of just looking at words one by one.
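
If you're curious what "attention" looks like under the hood, here's a stripped-down NumPy sketch of the scaled dot-product attention formula from the original "Attention Is All You Need" paper. Real Transformers add learned projection matrices, multiple attention heads, and a lot more, so treat this as the bare idea only:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The core attention step: each position scores every other
    position, turns the scores into weights, and takes a weighted
    mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token "attends" to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V  # blend the values using those weights

# Toy example: 4 "tokens", each represented by a 3-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q, K, V from the same input
print(out.shape)  # (4, 3) -- one context-aware vector per token
```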


Hallucinations: A hallucination is when a model generates information that's just plain wrong or nonsensical. Also referred to as the AI "lying" or making things up... it has nothing to do with dropping acid. It's like asking your friend what color the sky is and having them confidently tell you it's yellow. AI hallucinations can be pretty convincing sometimes. I will dig into hallucinations and how to avoid them in a future post.


Prompt Engineering: This is the art (and sometimes science) of crafting the perfect input to get the output you want from an AI model. It's become a crucial skill in the age of ChatGPT and other GenAI tools. Good prompt engineering can be the difference between getting a vague, useless response and a detailed, spot-on answer. There are many schools of thought on this; I recommend giving the LLM as much context and background as possible. Good prompting can help avoid hallucinations. (We can dig into more tips in another post)
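
To show what I mean by context and background, here are two prompts asking for the same thing. Every detail in the second one is my own invented example; the point is the contrast:

```python
# A vague prompt vs. a context-rich prompt for the same task.

vague_prompt = "Write an email about the meeting."

detailed_prompt = """You are an executive assistant at a 20-person software startup.
Write a short, friendly email to the engineering team rescheduling
Thursday's 2pm sprint planning meeting to Friday at 10am because
the product manager is traveling. Keep it under 100 words and end
with a request to confirm attendance."""

# Same model, very different results: the second prompt pins down
# the role, audience, facts, tone, length, and desired action.
```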


Context Window: The AI's short-term memory. It's the number of tokens the model can consider at one time. Context windows play a big part in how well an LLM performs in a chat. The window itself is a fixed size, so the longer the chat runs, the more of that budget gets used up, and the model can start losing track of earlier details. A larger context window means the AI can "remember" more of the conversation or document it's working with, which can lead to more coherent and contextually relevant responses. With each new model release, it seems like the context windows are getting bigger. Staying mindful and aware of the context window can help avoid hallucinations.
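
Here's a minimal sketch of one common (if blunt) way apps deal with a fixed context window: counting tokens and dropping the oldest messages first. Real chat apps often do fancier things, like summarizing old turns:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def trim_history(messages, max_tokens=200):
    """Keep only the most recent messages that fit in the token
    budget, dropping the oldest ones first."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest -> oldest
        n = len(enc.encode(msg))
        if used + n > max_tokens:
            break
        kept.append(msg)
        used += n
    return list(reversed(kept))  # restore chronological order

chat = [
    "User: What's a token?",
    "Assistant: A small chunk of text the model processes...",
    "User: And what's a context window?",
]
print(trim_history(chat, max_tokens=50))
```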


Another batch of AI terms demystified...
