🗞️Diversity and inclusion news🗞️
🇬🇧Don't call it DEI?🇺🇸
Corporate America has entered its Little Miss Jocelyn era — the sketch where she whispers, “No one knows I’m Black...” (If you haven't seen the sketch, here you go)
Only now it’s DEI professionals whispering, “No one knows we’re talking about race... unless they really listen.”
Since the Trump administration signed off on an executive order targeting DEI across schools, federal agencies, and businesses, diversity consultants have been scrambling for new labels. Trainings once titled “Understanding Racial Bias in Hiring” are now “Building Generational Cohesion.” “Equity strategy” is being rebranded as “employee engagement.” You might attend a “Neurodiversity and Communication Workshop” and only realise three slides in that this used to be a talk about structural racism — before the lawsuits made it too hot to say out loud.
This isn’t just a US thing either. In the UK, our public discourse on race is in full retreat. Diane Abbott’s suspension for attempting to articulate the visible differences in how racialised groups are treated in society sparked panic, not dialogue. As Jason Okundaye writes in The Guardian, we’ve gone from televised debates between Darcus Howe and Bernie Grant to a political climate where engaging honestly with race gets you punished, not platformed.
And so, instead of investing in actual transformation, companies are asking: how do we keep this DEI thing going without anyone noticing we’re still doing DEI? The result? DEI consultants are contorting themselves into doing sessions on “Generational Feedback Loops” with gluten-free vegan snacks and not a whiff of racial justice.
The irony? The structural issues haven’t gone anywhere — they’re just hiding behind a Canva rebrand. And while some execs hope this sleight of hand will “generate broader buy-in,” what it really signals is fear. Fear of the backlash. Fear of discomfort. And most depressingly, fear of naming the thing they claim to want to fix.
So if you're wondering where DEI went... it’s still here. It’s just wearing beige and whispering in HR-friendly tones.
Read more:
🤼‍♂️A men's problem🤼‍♂️
Unemployment is rising among recent college grads — but the headline stat hides something more specific (and revealing). In the US, young male graduates are now jobless at the same rate as their non-degree peers. The college premium? Poof.
Meanwhile, female grads? Their unemployment rate has remained stable or even dropped slightly. It’s not that women aren’t facing challenges — it’s that what they study and where they work matters👔
💡 So what’s behind this gender gap?
- AI displacement? That was the obvious guess, especially with all the chatter about coders getting cut. But it turns out entry-level tech hiring is rebounding, and early-career devs are actually doing better than average right now.
- The healthcare factor: Almost 50k of the 135k new jobs going to young female grads were in healthcare — a sector that’s growing, resilient to automation, and largely immune to economic mood swings.
- Men aren’t entering healthcare in the same numbers, and male-dominated graduate fields haven’t offered the same bounce.
- The bigger picture: While today’s employment crunch is hitting men harder, AI’s long-term threat may actually loom larger for women. That’s because they’re overrepresented in admin-heavy, junior white-collar roles that are next on AI’s to-do list — even if they’re crushing it now.
So what next?
- Men may need to “learn to care” just as much as we once told everyone to “learn to code.”
- The collapse of the college job premium for men is a labour market red flag, not a quirky blip.
- And the AI wave? It’s shifting — not sinking — jobs. For now.
📖 For more context: Read the full analysis in the FT (sub req’d)
🧠Things that make you go hmmm🧠
🛑 Meta’s not signing, and Europe’s not flinching
Meta just threw its hands up and said “sorry boss” to the EU’s voluntary AI code of practice — the framework meant to help companies prep for the bloc’s incoming AI Act, aka the world’s most ambitious (and contentious) attempt to regulate general-purpose AI🤖
Joel Kaplan, Meta’s Global Affairs Chief, called it “overreach” and warned it could “throttle” innovation.
But what do they object to? Well... the code asks folks to commit to:
🔍 Transparency:
- Model Documentation: AI makers must fill out a Model Dossier detailing what the model is, how it works, and what data it was trained on — shared with regulators or developers when needed.
- Info for Devs & Regulators: They must give enough info for downstream providers (you building apps on top of AI) to understand how to use models safely and legally.
- Version Control: Keep documentation up to date for 10 years — because "we forgot what we trained it on" won’t cut it anymore.
📚 Copyright:
- Respect Rights Online: Models should not scrape sites with proper copyright protections, or ones that have said “no thanks” via robots.txt.
- No Infringement in Outputs: Providers must build safeguards to stop models from spitting out copyrighted material.
- Policies Required: All providers must have a copyright policy, log what they’re doing, and be contactable by rightsholders for complaints.
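For the curious: the robots.txt opt-out mentioned above is machine-readable, and Python's standard library can check it. Here's a minimal sketch of what "respecting robots.txt" looks like in practice (the crawler name ExampleAIBot and the rules are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A site's robots.txt is just a plain-text rule list. Here we parse
# a tiny in-memory example instead of fetching a real site.
rp = RobotFileParser()
rp.parse([
    "User-agent: ExampleAIBot",   # hypothetical AI crawler
    "Disallow: /articles/",       # publisher opts this path out
])

# A compliant crawler asks before fetching each URL.
print(rp.can_fetch("ExampleAIBot", "https://example.com/articles/story1"))  # False
print(rp.can_fetch("ExampleAIBot", "https://example.com/about"))            # True
```

In real deployments the parser would fetch `https://site.com/robots.txt` via `rp.set_url(...)` and `rp.read()`; the dispute is over whether AI crawlers actually honour the answer.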
🛡️ Safety & Security:
- Guardrails for Big Models: Those with “systemic risk” (i.e. frontier models) must assess and mitigate harm from bad actors, bias, or runaway outputs.
- Red Teaming: Regular testing for risks like disinformation, code generation vulnerabilities, or racism.
- Usage Monitoring: Providers should track how models are used — without peeking too much — so they don’t power election bots or cyberattacks.
We'll let you decide whether those are innovation-stifling demands or fair asks🤔
While Meta is out, Microsoft looks likely to sign on, and OpenAI, Anthropic and Mistral already have. The code offers signatories some perks: reduced scrutiny and “legal clarity” (aka fewer future lawsuits). It’s not legally binding, but it's a crystal-clear signal of which companies are choosing to play ball versus stir the pot🍯
The “no thanks” from Meta comes amid a transatlantic rift in AI governance. While the US (under Trump 2.0) is actively deregulating and turning DEI work into a dirty word, the EU is doubling down on safety, copyright, and transparency — even under fierce lobbying from Airbus, BNP Paribas, and yes, Big Tech itself. (Meta is no stranger to the EU's work on regulation, having coughed up billions to the EU over past regulatory violations💰)
Brussels has made clear that the timeline’s staying put for now — but with backlash from one of tech's biggest operators and Europe's tech behemoths, it could yet be persuaded🇪🇺
🔍 So What?
Meta's refusal is as much political as it is strategic. It’s betting that resistance will yield a softer regulatory environment — or that others will break first. But it also deepens the optics of Meta as the outlier. And in a global AI arms race, where trust, safety, and data rights are under the microscope, the company’s stance may buy short-term wiggle room — but potentially at the cost of long-term reputational risk.
🔗 Read more:
👶 Baby Grok Is Coming — What Could Possibly Go Wrong?👶
Because clearly what the world needed most was a child-safe version of Elon Musk’s AI chatbot. Over the weekend, Musk announced Baby Grok — a “kid-friendly” AI app from xAI that will supposedly deliver wholesome content for the little ones.
No details yet, just vibes. And if Grok’s history is anything to go by… yikes.
In case you missed it, the original Grok has previously:
- spewed antisemitic conspiracies,
- got itself banned in Turkey,
- and now includes “companions” like an anime girlfriend that strips on command and a foul-mouthed red panda that roasts you with the fury of 4chan.
Grok is technically labelled “12+” on app stores, but that hasn’t stopped young users from exploring its more… let’s say experimental features. Now imagine handing that legacy a juice box and saying, “Go play nice with the kids.”
Sure, child-friendly AI isn’t a new idea. Google has Socratic. OpenAI is testing “ChatGPT for Kids.” But those aren’t run by someone who thinks “MechaHitler badge” jokes are just edgy banter.
We're all for kid-friendly AI. But if this is Musk’s idea of an educational sidekick, we’d like to take a moment — and we imagine so would several global regulators.
📉 So what?
Musk wants to put a PG filter on an AI that just cosplayed as MechaHitler and launched a stripper anime bot — and we’re supposed to believe it’s safe for kids? The announcement of “Baby Grok” isn’t just premature, it’s a PR deflection from the fact that Grok is still spewing hate speech, misinformation, and NSFW content under the hood.
🎬 Netflix goes GenAI: Lights, camera, algorithm🎬
It’s official: Netflix has dropped its first AI-generated footage into an original scripted series — and you didn’t even notice. The scene, which features a collapsing building in Argentine sci-fi drama The Eternaut, was built using generative AI by Netflix’s internal VFX unit, reportedly 10x faster and at a fraction of the cost of traditional methods.
Co-CEO Ted Sarandos was bullish: “This is real people doing real work with better tools.” He insists it’s not just about cost-cutting but about making once-premium visual effects (like de-aging) more accessible. The AI sequence wasn’t just fast and cheap — it was seamless enough to go undetected. No uncanny valley, just collapsed concrete.
Netflix has also teased wider AI integration:
- 🛠 AI in pre-visualisation and shot-planning
- 🔍 Generative AI for personalised search ("Give me an ‘80s dark psychological thriller")
- 📺 Interactive AI-powered ads dropping later this year
📉 So what?
Hollywood’s GenAI era isn’t coming — it’s here, quietly embedded into your algorithmic watchlist. Netflix’s embrace signals a “watershed moment” where AI moves from speculative to operational in mainstream TV production. The implications? Creative unions might shudder, studios will salivate at the cost savings, and the rest of us will likely carry on bingeing — blissfully unaware of whether it was a human, a machine, or a little of both that made the magic happen.
🔗 Read more at Ars Technica
🔗 TechCrunch: Netflix starts using GenAI in shows
🔗 Tom’s Guide: A watershed moment for GenAI in TV
📰Media v tech?📰
There’s a standoff happening between the media and the machines — and depending on who you ask, it’s either the future of sustainable journalism or the next chapter in its slow extinction🦕
On one side, you’ve got The New York Times, News Corp, and Mumsnet (yes, Mumsnet) launching legal attacks on OpenAI, Microsoft, and Perplexity for allegedly scraping content without permission. On the other? A growing queue of publishers quietly signing deals worth up to $250m over five years to license their archives to those very same AI firms💸
Among the deals:
- OpenAI has signed with The Guardian, The Atlantic, Time, Reuters, News Corp, Axel Springer, Hearst, The Washington Post and more.
- Amazon has bagged licensing deals with The New York Times, Condé Nast, and Hearst.
- Perplexity has partnered with The LA Times, Texas Tribune, and The Independent, even as it faces a potential legal challenge from... the BBC.
The Beeb claims Perplexity is scraping its content, citing "verbatim" results from recent coverage and demanding takedowns, deletion of training data, and cash. Perplexity, in turn, clapped back, calling the threat “manipulative and opportunistic,” and accusing the BBC of siding with Google's monopoly. 🤝 Spicy.
And in case you’re wondering — yes, this is the same industry that not long ago was fighting for survival after social media gatekept its audiences and monetised its traffic. Déjà vu? Possibly.
- From lawsuits to licensing: Publishers are torn between suing AI firms or striking lucrative content deals. Morals vs money? Solidarity vs survival? Pick your poison.
- It’s not just OpenAI: Amazon, Meta, Microsoft, Google, Perplexity, and even startups like Prorata.ai are cutting content deals. The AI training gold rush is very much on.
- BBC vs Perplexity could set a precedent: The UK’s national broadcaster threatening legal action could embolden others — or get quietly resolved behind closed doors like so many others.
- The industry is divided: While the likes of Reach (publisher of the Mirror and Express) say publishers must hold the line, others are cashing cheques. One exec warned this could be a repeat of the social media era: "Only takes one publisher to break away… then it disintegrates."
📉 So what?
AI’s appetite for journalism isn’t going away: Whether for training models or powering real-time Q&A tools, high-quality, human-written news is gold dust. That puts media orgs in a strong negotiating position — if they act together.
📈 The tools behind the tech📉
📦Product📦
📏Design📏
👩🏿💻Code👩🏿💻
🏢The business behind the tech🏢
🌐Partner Events & Opportunities🌐
A case study
Progress in Public: What Is the Civil Service? Your Guide to Digital and Data Careers in Government
In case you missed it: it's on 31st July, 12:00 - 13:30 BST, and it's on Zoom, so you can join us from anywhere in the world!
This webinar will be a great opportunity for you to find out more about Civil Service careers and how to get a job in the public sector. It's particularly relevant for those interested in:
- Engineering, including DevOps Engineers, Test Engineers, and Specialist Infrastructure Engineers;
- Development, including Software Developers and Frontend Developers;
- IT professions, including Network Architects, Technical Architects, and Data Architects;
- Interaction Designers;
- Service Transition Managers.
If you’re interested, please RSVP by signing up on our Luma Page below:
https://lu.ma/progressinpublicone
🙌🏾The latest from the Colorintech team🙌🏾