Hey
We have a new feature, the Colorintech Deal of the week!
Yep, we'll share the best tech deal we can find with you every week.
Check out the AI podcast version of this newsletter, or the video version on our socials.
This newsletter is free, but if you do want to get us an early Black Friday treat as a thank-you for 275+ editions, you can grab us one here 🎁
Oh, and if you missed an edition, you can find it here, or on this platform here.
🗞️Diversity and inclusion news🗞️
🔥 The State of Social Mobility in UK Founders — The Spicy Edition
🚨 The founder pipeline leaks talent before fundraising even starts. David and the team at Social Mobility Ventures have put out a report, and its insights are punchy.
The most dramatic drop-off happens before Pre-Seed:
- 50% of state-school ‘stealth’ founders disappear before Pre-Seed…
- Meanwhile, private-school founders increase at the same stage.
Translation: The UK startup ecosystem filters founders by wealth before it even filters them by ideas.
🚀 But once state-school founders get in the door… they outperform
The data is wild:
- State-school founders are 30% more likely to reach Series A+ than their private-school peers.
- They represent only ~15% of the sample, but 37% of total funding (£10.2B).
- At Series C, their representation almost doubles compared to Pre-Seed (25% → 42%).
Translation: The problem isn’t performance. It’s permission to start.
💸 The Friends & Family round is a myth… unless you’re privileged
The report makes it brutally clear:
- Working-class founders are 3× less likely to access friends-and-family capital.
- When you strip out founders who jump straight to VC, the “friends & family” gap becomes glaringly obvious. (Page 30)
- State-school founders rely heavily on personal savings, while private-school founders lean more into VC or their networks.
Translation: “Friends & family” rounds should honestly be renamed “Wealth & Networks” rounds.
💀 The class divide shows up in time-to-commit & time-to-fundraise
- Only 39% of state-school founders go full-time immediately, compared with 67% of private-school founders.
- State-school founders take longer to close their first round because they’re raising without warm intros.
Translation: Wealth buys time — which buys optionality — which buys opportunity.
🌍 Almost half of UK founders in the dataset are international
According to page 13 of the report, almost half of the founders in the dataset are international.
Translation: The UK is a magnet for global entrepreneurial talent — but less effective at turning its own population into founders.
🎓 Oxbridge isn’t the gatekeeper — but it still pays
State-school founders are more likely to come from non-Russell Group universities (57%) than anywhere elite.
But:
Translation: Prestige still works as a capital-raising accelerant — even when talent exists everywhere else.
📉 Founders from working-class backgrounds are dramatically under-represented
- Only 18% of founders come from working-class or low-income households.
- Compared to 45% in the UK population. (Page 14)
Translation: The “founder factory” is overwhelmingly an upper-middle-class machine.
🧩 Insider pathways still determine early access
- Private-school founders are far more likely to have VC-backed startup experience, giving them an early edge at Pre-Seed. (Page 18)
- State-school founders have fewer “ecosystem adjacent” CVs.
Translation: The startup ecosystem still behaves like a closed loop: the people who have been inside get to stay inside.
🧠 What founders wish they had is not complicated
The top asks from underrepresented founders are laid out on page 34 of the report.
Translation: The barriers are structural, not talent-related — and every one of them is fixable.
🌱 The most important takeaway: State-school founders aren’t risky — they’re underfunded
Page 36 lays it out directly:
- These founders build “long-term, resilient, successful companies.”
- The UK is simply failing to fund them early enough.
Translation: Backing state-school founders isn’t charity — it’s growth strategy.
📚 Read more: View the report here
💼MIT finds AI can already replace 11.7% of the US workforce💼
MIT has dropped a very calm little grenade into the policy world: according to its new Iceberg Index, AI already has the technical capability to replace 11.7% of the US workforce — roughly $1.2 trillion in wages — spanning finance, HR, logistics, healthcare admin, and professional services. And no, that’s not a typo. ❄️
Built with Oak Ridge National Laboratory and fed by a frankly outrageous amount of labour-market data (151 million workers, 923 job types, 32,000 skills, and 13,000 AI tools), the Iceberg Index treats each worker as a “digital twin” with real skills, tasks, and geography. It can model who gets disrupted, where, and how deeply. That’s the part policymakers have been missing — exposure happens long before layoffs arrive. 🧊
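For the technically curious, here's a toy sketch of what that “digital twin” matching might look like in code. To be clear, this is not MIT's actual methodology — the workers, tasks, and AI capabilities below are invented purely to illustrate the idea of scoring wage exposure by overlapping a worker's tasks with what current AI tools can already do.

```python
# Illustrative only: a toy version of "digital twin" wage-exposure scoring.
# The workers, tasks, and AI capabilities are made up; the real Iceberg Index
# models 151 million workers, 923 occupations, and 32,000 skills.

AI_CAPABLE_TASKS = {
    "draft emails", "reconcile invoices", "schedule shifts",
    "summarise reports", "answer routine queries",
}

workers = [
    {"role": "payroll admin", "wage": 38_000,
     "tasks": {"reconcile invoices", "answer routine queries", "audit expenses"}},
    {"role": "site electrician", "wage": 42_000,
     "tasks": {"install wiring", "fault-find circuits", "schedule shifts"}},
]

def exposed_wage(worker: dict) -> float:
    """Wage weighted by the share of this worker's tasks AI can already do."""
    share = len(worker["tasks"] & AI_CAPABLE_TASKS) / len(worker["tasks"])
    return share * worker["wage"]

total = sum(w["wage"] for w in workers)
exposed = sum(exposed_wage(w) for w in workers)
print(f"Exposed share of total wages: {exposed / total:.1%}")
```

Scale that from two made-up profiles to 151 million real ones and you get the kind of headline number the Index reports.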
The big twist? Those headline-grabbing tech layoffs are actually just 2.2% of total wage exposure — the tip of the iceberg. The real disruption is happening under the surface in white-collar admin, finance, coordination, HR, and logistics. Basically: the “safe jobs” aren’t safe. The visible panic is tiny compared to the $1.2T worth of routine tasks AI can already do today. 👀
And forget the usual “coastal tech hub” narrative. The Iceberg data shows exposure is everywhere — not just California and New York. In fact, states like South Dakota, Tennessee, North Carolina, and Utah show higher exposure because their administrative and financial sectors are incredibly automatable. MIT found that GDP, income, and unemployment explain less than 5% of this variation — meaning traditional economic indicators are basically useless for spotting AI disruption. 📉
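“Explain less than 5% of this variation” is essentially an R² claim. Here's a hedged sketch — with synthetic numbers, not MIT's data — of the kind of regression behind it: state-level exposure regressed on GDP, income, and unemployment, where almost all the variance is left on the table.

```python
# Illustrative only: synthetic state-level data, not the Iceberg dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 50  # one row per US state
gdp, income, unemployment = rng.normal(size=(3, n))

# Exposure driven mostly by something the indicators don't capture
# (e.g. occupation mix), so they barely move the needle.
exposure = 0.1 * gdp + rng.normal(size=n)

X = np.column_stack([np.ones(n), gdp, income, unemployment])
coef, *_ = np.linalg.lstsq(X, exposure, rcond=None)
resid = exposure - X @ coef
r2 = 1 - resid.var() / exposure.var()
print(f"R² from GDP/income/unemployment: {r2:.2f}")  # small: most variance unexplained
```

The point of the exercise: if you want to predict where AI exposure lands, you need skills and task data, not macro dashboards.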
One more curveball: states with big manufacturing footprints are about to get blindsided. While politicians keep worrying about robots replacing welders, MIT shows that the real exposure is in the admin and coordination work around manufacturing, which has 10x the automation risk of the shop floor. Ohio, Michigan, and Tennessee: we’re looking at you. 🏭
MIT isn’t saying 11.7% of jobs will vanish tomorrow — it’s saying AI can already do those tasks technically, and policymakers should be modelling scenarios now instead of reacting once the layoffs start. Think of the Iceberg dataset as a national “AI hazard map,” except instead of earthquakes it’s routine paperwork getting vaporised. 📊
📉 So what?
The headline isn’t “AI kills jobs.” It’s “governments aren’t ready.” The countries (and companies) that win will be the ones that:
- Stop using GDP and unemployment as proxies for AI readiness (MIT showed they tell you nothing)
- Build skills-first retraining and mobility pathways
- Treat administrative and financial work as highest-risk, not lowest
- Invest in transition strategies before disruption — not after
AI exposure is now a skills problem, not just a tech problem — and skills problems are solvable with the right policy, investment, and training infrastructure. 🚀
📚 Read more:
CNBC coverage: https://www.cnbc.com/2025/11/26/mit-study-finds-ai-can-already-replace-11point7percent-of-us-workforce.html
MIT Project Iceberg Report: https://iceberg.mit.edu/report.pdf
🧠Things that make you go hmmm🧠
💸 The End of the Free Lunch (For Real This Time) 💸
If you’ve been happily living off free AI like it’s bottomless brunch, the bill is now arriving at the table. Google and OpenAI are quietly (and not so quietly) tightening the taps on free usage and gearing up to monetise with ads – exactly the combo you’d expect right before things get… less cute. 😬
On the Google side, Gemini 3 Pro has shifted from “you get 5 prompts a day” to the very flexible “Basic access – daily limits may change frequently”. Its fancy image model (Nano Banana Pro) has been bouncing between 2–3 free images a day, with a warning that limits will move around depending on demand. Translation: if you’re not on Pro or Ultra, you’re at the back of the queue when the GPUs start screaming. 🎯
OpenAI is doing the same dance with video. Sora 2 launched with 30 free generations a day; that’s now down to six for non-paying users. The head of Sora literally said, “our gpus are melting” while telling people they can now buy extra generations. Free tier as on-ramp, paid tier as destination: the SaaS classic, now with cinematic B-roll. 🧮
And because subscription money apparently isn’t enough, ChatGPT is also preparing to do what every platform eventually does: run ads in the product itself. A leak from the Android beta shows references to a new ads system, including “search ad” and “search ads carousel” inside ChatGPT. With ~800m weekly users and 2.5bn prompts a day, OpenAI doesn’t just know what you search – it knows what you worry about at 2am. Imagine that ad engine plugged into your prompts rather than your browser. 📺
While consumers get caps and ads, the enterprise flex is shifting too. Salesforce CEO Marc Benioff – who’s been a very loud ChatGPT evangelist – tried Gemini 3 for two hours and publicly tweeted that he’s “not going back”, calling the leap in reasoning, speed, images and video “insane”. When someone running a CRM empire swaps their default model, that’s not just vibes – that’s a signal to every CIO who copies his tech stack by reflex. 🤝
Put it all together and the direction of travel is clear: the “play with everything for free” phase is ending, and we’re moving into the stack-lock-in era. Big platforms pick their preferred model (Gemini, GPT, Claude, etc.), they bundle it into products, and users experience AI less as a neutral public good and more as: “whatever your employer, uni or SaaS tools have licence deals for.” The risk? Access, quality, and rights all start depending on which side of the paywall – and which ecosystem – you land in. 🧑🏾💻
📉 So what?
- Access is about to become a class issue, not just a tech one. If the good stuff lives behind £20–£200/month subscriptions, students, early-career talent, and smaller orgs get stuck on throttled free tiers while big firms fine-tune on premium models. 💳
- Expect more “AI, but worse” for free users. Lower caps, slower queues, ad injection, and possibly downgraded models will become the norm to push upgrades – classic “enshittification” but now inside your productivity tools and learning workflows. 🔌
- Founders need a real AI strategy, not just vibes. If your product quietly leans on “free” API access or assumes infinite cheap inference, now is the time to re-run the numbers – and think carefully about which model ecosystem you’re hitching yourself to. 🧪
- Workers should treat AI skills like paid software, not a free toy. If tools get paywalled at work, the advantage will go to people who already know how to get the most out of whichever model is in front of them – not just “ChatGPT but on the side while I revise.” 🧱
📚 Read more:
✈️ Airbus’ A320 Recall Chaos: When the Sun Becomes a Software Bug
Just when you thought “Mercury retrograde in aerospace” wasn’t a thing, Airbus has issued a recall covering more than half of the world’s A320 fleet after a mid-air incident revealed that intense solar radiation can corrupt flight-control data. Yes — the sun is now an aviation software threat. ☀️➕✈️ = 😬
The recall affects 6,000 aircraft, with airlines from American to Lufthansa scrambling to patch software, run diagnostics, and, in some cases, perform full hardware replacements. The UK got off lightly — only a few BA short-haul planes need fixes — but elsewhere? Total travel-bedlam. Jetstar cancelled 90 flights. ANA grounded dozens. Avianca says 70% of its fleet is affected and has paused ticket sales. Airports from Sydney to Tokyo are basically group-therapy sessions now. 😭
The root issue sits inside the ELAC (Elevator and Aileron Computer), which turns pilot commands into pitch control. A recent JetBlue flight reportedly suffered a sudden altitude drop linked to corrupted ELAC data after solar exposure — sending 15 passengers to hospital and sending the world’s busiest aircraft family into emergency recall mode. 🌡️
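A bit of context on why “corrupted flight data” is normally survivable: safety-critical avionics usually guard against radiation-induced bit flips (single-event upsets) with redundancy and voting. The sketch below is a generic illustration of that idea — majority voting across three redundant copies of a value — and is emphatically not Airbus’s or Thales’s actual ELAC design.

```python
# Generic illustration of triple modular redundancy (TMR), a common defence
# against radiation-induced bit flips. Not based on the real ELAC software.

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies of the same value."""
    return (a & b) | (a & c) | (b & c)

commanded_pitch = 0b0110_1010                    # value all three channels should hold
upset_channel   = commanded_pitch ^ 0b0000_1000  # one channel takes a cosmic-ray hit

# The two healthy channels outvote the corrupted one.
assert majority_vote(commanded_pitch, upset_channel, commanded_pitch) == commanded_pitch
print("single-channel upset masked; the flipped bit never reaches the control surface")
```

The recall suggests some combination of defences like these didn’t hold here — hence the scramble to revert the software.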
The fix sounds simple — “just revert the software” — until you consider that thousands of planes are scattered across time zones, maintenance shops are already overloaded, and some aircraft need hardware swaps that were definitely not in anyone’s holiday-season planning. Meanwhile, Airbus and Thales are diplomatically pointing at the fine print to clarify whose software, whose spec, and whose cosmic radiation shielding are… whose. 🛰️
This is also happening as Airbus battles huge engine-maintenance backlogs and record global demand. Add the fact that 3,000 A320s were in the air when the notice went out, and the logistics maths becomes the stuff of nightmares. Aviation nerds call this “operational complexity.” Most travellers call it “I’m stuck at an airport eating crisps for dinner.” 🍟
📉 So what?
This recall is a reminder that:
- Systems we assume are robust can crumble under unexpected edge cases — including literal solar storms. ☀️
- Safety-critical software is only as strong as the weirdest thing the universe throws at it, and the universe is creative. 🌌
- Aviation’s digital stack is now so complex that a single corrupted data register can ground half the world’s short-haul flights. 💻
- This won’t be the last time we see “environmental interference” become a tech failure — climate, solar cycles, and digital dependency are intersecting in new and unpredictable ways. 🌍
- For digital regulators, engineers and policymakers: this is your Black Mirror episode made real.
📚 Read more:
The Guardian — Airbus issues major A320 recall after mid-air incident grounds planes https://www.theguardian.com/business/2025/nov/28/airbus-issues-major-a320-recall-after-recent-mid-air-incident
🧨 HP Just Announced Up to 6,000 Layoffs — And Blamed AI (Again)
Another week, another tech giant insisting that AI made them do it. This time it’s HP, which says it will cut 4,000–6,000 jobs to “accelerate innovation” and “boost productivity.” Translation: AI is cheaper than people, and shareholders want their billion-dollar efficiency gains now, not in fiscal 2028. 💸
CEO Enrique Lores told analysts the layoffs will hit product development, internal operations, and customer support — ironically, the very areas where HP says AI will improve customer satisfaction. It’s giving “let’s fire the team and then wonder why customer experience declined.” 🤔
HP’s explanation leans heavily on the usual buzzwords: “digital transformation,” “platform simplification,” “portfolio optimization.” But underneath the bingo card, this is part of a much bigger corporate trend: AI is the new scapegoat for job cuts companies were planning anyway.
Salesforce’s Marc Benioff admitted it outright (“I need less heads”). Amazon has been accused of using AI investments to justify layoffs while ramping up H-1B recruitment. Meta cut 5% of its workforce to “streamline for AI.” Klarna, Intuit, and Duolingo have replaced entire departments with AI workers or “AI-enhanced roles.”✨
But experts aren’t fully buying the narrative. Wharton’s Peter Cappelli says there’s “very little evidence” that AI is capable of replacing jobs at the scale companies claim. Implementing AI properly is slow, messy, and requires — wait for it — more people, not fewer.
Meanwhile Gartner predicts that 75% of IT work in 2030 will still rely on humans, and the World Economic Forum believes AI will create 78 million more jobs than it destroys globally by the end of the decade.
So yes — AI is transforming work. But companies also know that “AI made us do it” is the perfect PR shield in a cost-cutting year.
📉 So what?
For our community, here’s what matters:
- AI layoffs are now a narrative device — not a neutral economic indicator. Companies are using AI to justify decisions they’d make in any downturn. 🤖
- Job displacement is real, but the scale is still unclear. The loudest predictions aren’t always the most honest. 🌪️
- Technical talent isn’t disappearing — it’s shifting. Roles will look different, but AI-literate workers remain essential. 🔧
- This reinforces why organisations like Colorintech play a crucial role: preparing underrepresented talent for the next wave of AI-enabled work, not the last one. 🌍
📚 Read more:
Ars Technica — HP plans to save millions by laying off thousands, ramping up AI use https://arstechnica.com/information-technology/2025/11/hp-plans-to-save-millions-by-laying-off-thousands-ramping-up-ai-use/
🚓AI and Criminal justice🚓
US telecom giant Securus Technologies has quietly stepped into its Minority Report era 📞, training an AI model on years of recorded prison phone and video calls to detect when crimes are being “contemplated.” One model was trained on seven years of Texas inmate calls alone — all from people who must use the system to contact their families. And yes, inmates are told their calls are recorded… but not that their voices are being fed into an AI system as training data. That’s what advocates call “coercive consent” 🚨.
Securus is now piloting real-time scanning of calls, texts, emails and video chats across US jails and prisons. The AI flags segments it believes might indicate criminal intent, and human agents then pass those snippets to investigators. That might sound like a public safety win, but Securus has already been caught illegally recording attorney–client calls, so handing them predictive surveillance powers feels less like innovation and more like a civil liberties booby trap ⚖️.
The company claims it’s disrupting human trafficking networks and contraband smuggling, though it hasn’t provided a single example linked to this new AI system. Advocates are worried for good reason: predictive policing already performs terribly in the real world, so sticking it in prisons — the most structurally biased spaces in the justice system — is an ethical sinkhole. As ACLU’s Corene Kendrick put it: “Are we going to stop crime before it happens by monitoring every utterance and thought of incarcerated people?” The tech is miles ahead of the law, and not in a good way 🕳️.
Then there’s the money. After a 2024 FCC reform barred telecoms from passing surveillance costs on to inmates, Securus lobbied aggressively to reverse it — and won. Under new rules championed by Trump-appointed FCC leadership, prisons can now charge incarcerated people for the very AI tools being used to monitor them, including data storage, transcription, and model development. So people are literally funding their own surveillance, often at extortionate call rates 💰.
📉 So what?
This isn’t just a prison tech story — it’s a preview of where AI surveillance goes first: toward populations with the least power to object. It signals a future where “behavioural intent detection” becomes normalised inside state systems; where the ethical gap between AI capability and legal safeguards keeps widening; where predictive policing gets a glossy AI rebrand instead of the accountability it needs; and where marginalised communities’ communications are treated as free training data. The uncomfortable UK parallel? As algorithmic policing expands here — from facial recognition to “threat modelling” — the question becomes not what AI can do, but what rights people retain as it rolls out 🧩.
📚 Read more:
MIT Technology Review – An AI model trained on prison phone calls is now being used to surveil inmates https://www.technologyreview.com/2025/12/01/1128591/an-ai-model-trained-on-prison-phone-calls-is-now-being-used-to-surveil-inmates/ 🔗
📈 The tools behind the tech📉
📦Product📦
📏Design📏
👩🏿💻Code👩🏿💻
🏢The business behind the tech🏢
🛍️Tech deal of the week🛍️
All image credits to Amazon.
So Black Friday may be over, but here is our Tech Deal of the Week:
Apple 2025 MacBook Air 13-inch Laptop with M4 chip:
Yep, 15% off an M4 Mac
16GB RAM,
256GB SSD storage,
£849
Link here and check out our other deals
And view our shop with our whole collection here
😅Meme/AI video of the week 😅 (the internet can be savage lol)
🌐Partner Events & Opportunities 🌐
Below are the top opportunities we want to highlight to you this week! If you want to see more, then check out our new website where we have a whole page dedicated to events and opportunities from us and our partners:
https://www.colorintech.org/events
😃Founding Full-Stack Engineer😃
Neya is hiring a founding engineer to help build the world’s first AI super-neighbour. We're on a mission to build stronger, more connected local communities by pioneering the development of AI to increase social connection.
Location: London (ideally 2 days per week in person)
Team: Founding Team
Compensation: £75k–£90k + meaningful equity (0.5–2% depending on experience)
Start date: ASAP
Tech stack: LLMs (LangChain), Node (Express), Next.js (React + TypeScript), Supabase, vector embeddings, Native mobile (currently Capacitor), WhatsApp Cloud API
If you want to apply, fill out our short form here, telling the team:
- What is your strongest engineering edge?
- What part of Neya’s stack or problem space would you be most excited (or nervous) about?
- Why does Neya appeal to you, and especially at this stage?
- About something you’ve built that best represents your qualities as an engineer.
🙌Resource launch🙌
We are ecstatic to share that our 2025 International Women's Day Resource is now officially published on the Colorintech website. This resource is designed to empower you with practical steps for personal and professional growth, encouraging you to celebrate your wins and embrace the power of your network. Take some time out of your week to check out our IWD resource and start charting your next steps for advancement!
Also, if you missed the IWD Resource Launch webinar or want to re-watch, the full recording of our "Accelerate Action" webinar is now available! You can hear directly from our phenomenal speakers and resource contributors:
- @MelissaBlokland (Founder at ZERANOVA)
- @ElisabethEweka (Founder at ENGRL & Principal Digital Consultant at Hoare Lea)
- @AjoaAkuamoah (Programme Delivery Lead at the Department for Science, Innovation and Technology)
- @MoniqueCampbell (Strategic Account Executive at Salsify)
They discuss the strategies and personal courage required to navigate unique paths to success. Their insights are the perfect complement to the actionable steps and words of wisdom laid out in the IWD Resource.
Click below to access both the Resource and the Webinar Recording:
[ACCESS RESOURCE & WATCH RECORDING]
We’d like to give a huge thank you to our webinar panelists and every individual behind the scenes who poured their expertise and time into making the 2025 International Women's Day Resource and its launch event a massive success.
🙌🏾The latest from the Colorintech team🙌🏾