Context and Biases | Issue 30
The Unfolding:ai weekly newsletter about AI for Business Professionals

Hi Everyone,
We want you to get the most value from the newsletter. In addition to the email edition, there is a supporting website where you can log in and access all previous issues.
If you are just getting started, try this issue: Where to Get Started
Have you considered upgrading to our premium tier? It costs about the same as a Starbucks a month, and you get over twice the content, including more depth on key topics, key products, and how to generate value with AI in your organisation and personal productivity.
We are a growing newsletter, and we appreciate every share and recommendation.
Breaking Bias
Bias in AI is a reflection of the preconceptions and oversights that seep into our systems, originating from the data that trains them, the algorithms that drive them, and the societal norms that shape them. This becomes a critical issue, not because technology inherently favours or discriminates, but because it mirrors the inequalities present in its creation process.
The implications are profound, affecting everything from job opportunities to access to services, making it crucial for those of us in the business sphere to address. It's about ensuring the AI tools we develop and deploy work equitably for all.
One of the easiest ways to see this is to take some generic prompts and run them through a text-to-image generator. Try them in tools such as Leonardo.ai, Microsoft Designer, or within ChatGPT 4.
You can see clear ethnicity and gender stereotyping.
Behind the prompts you enter, additional instructions are sometimes silently added in an attempt to counteract these biases, such as:
7. Diversify depictions of ALL images with people to always include DESCENT and GENDER for EACH person using direct terms. Adjust only human descriptions.
// - EXPLICITLY specify these attributes, not abstractly reference them. The attributes should be specified in a minimal way and should directly describe their physical form.
// - Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race. Additionally, focus on creating diverse, inclusive, and exploratory scenes via the properties you choose during rewrites. Make choices that may be insightful or unique sometimes.
// - Use "various" or "diverse" ONLY IF the description refers to groups of more than 3 people. Do not change the number of people requested in the original description
The challenge now is that over-correcting can introduce a new problem, as Google Gemini just discovered: similar anti-bias prompting produced wildly historically inaccurate, and potentially offensive, images, resulting in the withdrawal of the application while re-engineering takes place.
Whilst it is easier to spot this in visual AI, the biases are in the language and the data. It is down to us to moderate the output and regenerate it, specifically correcting it when it exhibits bias.
In Context
What is Context?
In language models like ChatGPT, 'context' refers to the surrounding text that the model uses to generate a relevant reply. Think of context like the history of a conversation between you and a friend. If you suddenly ask, "What do you think?", your friend needs to know what you were talking about before to give a meaningful answer.
Why is context length a challenge?
Memory Limitation: These models have a 'word limit' for each conversation round. Imagine it like a notepad with limited space; if the conversation gets too long, older parts must be erased to make room for new text.
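The notepad analogy can be sketched in a few lines of Python. This is an illustrative, hypothetical helper (not how any particular model works internally): it estimates tokens crudely by word count and drops the oldest messages once the budget is exceeded.

```python
# Sketch: keeping a conversation within a fixed "notepad" (context) budget.
# Token counts are approximated by word counts purely for illustration.

def trim_history(messages, max_tokens=50):
    """Drop the oldest messages until the estimated total fits the budget."""
    def estimate_tokens(text):
        return len(text.split())  # crude approximation: 1 word ~ 1 token

    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)  # erase the oldest entry, like a full notepad page
    return trimmed

history = [
    "a long opening message " * 10,   # ~40 "tokens"
    "a follow-up question",
    "What do you think?",
]
print(trim_history(history, max_tokens=20))  # only the most recent messages survive
```

Real systems use proper tokenizers and smarter strategies (summarising old turns rather than deleting them), but the constraint is the same: old context must go to make room for new text.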
Gaining context
Just a year ago, 8,000 tokens (think of a token as roughly three-quarters of a word) was considered a leading context capacity. Claude now offers 200,000 and ChatGPT (and Bing) 128,000. These are significant gains that make the latest models more useful. Greater context, or short-term memory, lets an AI handle more complex tasks and reduces the need for complex additional databases to summarise and support the AI.
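For a rough sense of scale, those context sizes can be converted into words and pages. The 0.75 words-per-token and 500 words-per-page ratios below are rules of thumb, not exact figures:

```python
# Rough illustration: how much text fits in different context windows.
# Conversion ratios are approximate rules of thumb, not exact values.

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

context_windows = {       # sizes in tokens, as quoted in the text
    "GPT-4 (a year ago)": 8_000,
    "ChatGPT / Bing": 128_000,
    "Claude": 200_000,
}

for model, tokens in context_windows.items():
    words = tokens * WORDS_PER_TOKEN
    pages = words / WORDS_PER_PAGE
    print(f"{model:20s} ~{words:>9,.0f} words ~{pages:>5,.0f} pages")
```

By this estimate a 200,000-token window holds on the order of 300 pages of text, which is why long documents can now be pasted in whole rather than chunked into a database first.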
Gemini 1.5 (due some time soon™) will have a context window of 1 million tokens, with versions tested up to 10 million. This radically opens up complex data interaction without the need for ancillary databases and technology; bolting on ancillary retrieval data can also make the AI less effective.
In summary, a greater context window is better. What is not yet well understood is the computational cost, and how this will be reflected in billing. With GPT-4 used via the API, a query using the full 128k tokens would cost around $1.28. This could become a significant cost for an enterprise if unneeded data is being transported across queries.
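The $1.28 figure works out as follows, assuming an API input price of $0.01 per 1,000 tokens (prices change frequently; check your provider's current rate card):

```python
# Sketch of the per-query cost calculation, assuming $0.01 per 1,000
# input tokens (an assumed rate for illustration; real pricing varies).

def query_cost(tokens, dollars_per_1k=0.01):
    """Cost in dollars of sending `tokens` input tokens at a per-1k rate."""
    return tokens / 1_000 * dollars_per_1k

print(f"${query_cost(128_000):.2f} per fully loaded 128k-token query")
print(f"${query_cost(128_000) * 1_000:,.0f} for 1,000 such queries")
```

The second line is the point for enterprise budgeting: a chat application that ships the full context on every call multiplies that per-query cost across every user and every turn.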
The subscription chat licences are not charged this way; instead, they have usage throttles or daily query caps.