🗞️Diversity and inclusion news🗞️
🎓 Think Landing a Job Is Hard? Try Having 'DEI' on Your Resume 🍱
Once upon a time, having “Diversity, Equity & Inclusion” on your LinkedIn was the fastest way to get headhunted. Now, it’s the fastest way to get ghosted. (Bloomberg)
Bloomberg’s latest piece paints a grim picture: DEI professionals — once courted by Fortune 500s and congratulated at award ceremonies — are now quietly scrubbing those three letters from their résumés. Recruiters have gone cold. Interviews vanish mid-process. Entire departments have been “reframed” into generic HR roles. The vibe shift is real.
🧩 What’s happening
- After Trump’s crusade against “illegal DEI,” many US firms have downsized or dismantled diversity teams, citing fears of lawsuits or lost government contracts.
- Job postings for DEI roles have halved since 2019, per Revelio Labs, after a pandemic-era boom that saw listings nearly quadruple.
- Some professionals have resorted to euphemisms — “culture strategy,” “people experience,” or the classic “organizational excellence” — to stay employable.
- Others are leaving the field altogether, pivoting to consulting or academia. One DEI leader told Bloomberg it’s “hard to recall any skillset becoming obsolete so quickly and completely.”
🫥 The human cost
Behind the stats are people like David Daniels IV, who lost out on a recruiting role after his DEI background came up during a reference check, or Josue Mendez, whose job interview “suddenly went cold” the moment he mentioned diversity work. The fear is palpable: professionals report being openly avoided at networking events, as if “DEI” were contagious.
And yet, companies still love the optics. Many have simply rebranded their commitments — pivoting from “racial equity” to “veteran inclusion” or “belonging initiatives.” Same PowerPoint slides, different stock photos.
After years of glossy corporate pledges, the DEI bubble has burst — and the people who built it are the ones paying the price. The irony? These were the same experts who helped companies navigate crises in 2020; now they’re being cut to “de-risk” the business.
It’s a harsh reminder that corporate “values” often end where legal risk begins. But as one laid-off practitioner put it, “America has always been this way.”
📖 Read more: Bloomberg – Think Landing a Job Is Hard? Try Having 'DEI' on Your Resume
🎓 “Does seeing Black and Asian people on TV make you feel mad?”
Yep, some folks really do feel that way. Racism, we might say. Guess who doesn’t think so. Read more here.
🧠Things that make you go hmmm🧠
📰Grokipedia📰
Want to see the world the way Elon Musk does? Well, probably not, but if you do, now you can: he's launched what appears to be a largely scraped copy of Wikipedia, rewritten history the way he wants to tell it, and aptly called it Grokipedia.
Yep, the new AI-powered Wikipedia competitor falsely claims that pornography worsened the AIDS epidemic and that social media may be fueling a rise in transgender people. But hey, they say history is written by the victors, so we assume this is his attempt to "win".
In case you missed what happened, Axios broke it down well:
Elon Musk's Grokipedia launched on Monday, but crashed shortly after. He launched it after complaining about "propaganda" on Wikipedia.
Many entries closely resemble their Wikipedia counterparts, but Musk's right-leaning approach shapes how some topics are framed. Musk's own Grokipedia page features a "Recognition and Long-Term Vision" section, compared to Wikipedia's "Accolades."
- "His long-term vision prioritizes safeguarding human consciousness against existential threats, emphasizing the establishment of a self-sustaining multi-planetary civilization as a hedge against planetary-scale catastrophes on Earth," the page states.
- A page on former President Biden highlights facts about his policies and historical events, along with what Grokipedia calls "severe empirical setbacks" during his presidency — a significant language difference from his Wikipedia entry.
Axios also reports the other side: Wikipedia co-founder Jimmy Wales pushed back on Musk's criticism after the billionaire objected to how his Nazi salute allegations were described on the site earlier this year.
- "Is there anything you consider inaccurate in that description?" Wales said on X in January. "It's true you did the gesture (twice) and that people did compare it to a Nazi salute (many people) and it's true that you denied it had any meaning."
Read more on Axios here
📉 So what?
Who would have thought there'd come a time when you'd rather trust a group of anonymous folks writing on the internet than someone you can actually attribute? But hey, this is 2025.
☁️Google and Anthropic☁️
Anthropic — the world’s most expensive “scrappy startup” — just inked a tens-of-billions deal with Google to rent a million of its custom AI chips. The idea? Power up Claude (its ChatGPT rival) and maybe, finally, get enough compute to stop playing catch-up with OpenAI📈
Google, of course, already owns a chunk of Anthropic — so this is less a partnership and more a financial ouroboros: Big Tech investing in itself, selling to itself, and congratulating itself on the progress. Somewhere, an antitrust lawyer just woke up in a cold sweat 😰
Anthropic gets access to Google’s Tensor Processing Units (TPUs) — bespoke chips built to rival Nvidia’s increasingly unobtainable GPUs. Google gets a marquee customer to show off its cloud muscle (and quietly edge into Amazon’s territory). Everyone else gets a headache trying to track who’s supplying whom.
For context:
- Anthropic’s also plugged into Amazon’s Trainium chips and Nvidia GPUs, because in 2025, true innovation means keeping all your sugar daddies happy👩🏻💻
- Google’s promising over a gigawatt of compute next year — roughly what it takes to power a small city or train one morally conflicted chatbot☢️
- The deal follows OpenAI’s own multibillion-dollar chip splurge, adding more fuel to what increasingly looks like an AI spending bubble disguised as a hardware strategy.💸
📉 So what?
The AI “revolution” is beginning to look suspiciously like the financial system it promised to disrupt — a closed loop of trillion-dollar firms investing in, selling to, and hyping each other.
Anthropic says it’s “defining the frontier of AI.” Maybe. But right now, that frontier looks a lot like a cloud server farm full of invoices addressed to other cloud server farms.
Still, for anyone keeping score: Google gets to flex, Anthropic gets to survive, and Nvidia gets to laugh all the way to the (GPU-cooled) bank.
📖 Read more: FT – Anthropic and Google Cloud strike blockbuster AI chips deal
Finimize covered it too.
🔍 Expense-gate: employees are using AI to fake receipts — and honestly, it’s kind of genius (until it isn’t)🔍
In news that’ll make every finance director’s blood pressure spike, the Financial Times reports that a new white-collar hustle is spreading: AI-generated fake expense receipts.
Expense software platforms like AppZen and Ramp say there’s been a surge in eerily realistic AI receipts — crumpled edges, barista signatures, perfectly itemised menus — all cooked up by OpenAI’s image tools. In September alone, 14% of all fraudulent receipts were AI-generated, compared to none a year ago.
Turns out, the future of workplace fraud looks a lot like the past — just with better UX🧾
🧾 The new scam starter pack
- Then: Fiddle with Photoshop or buy dodgy templates online.
- Now: Type “make a realistic dinner receipt for £42 at The Ivy with a handwritten note” into a chatbot and voilà — dinner with imaginary clients achieved.
- Platforms say GPT-4o’s upgraded image generator in March triggered the boom, as employees discovered it could conjure fake paper creases better than their local stationery shop.
- Some receipts are so good that SAP Concur now warns clients to “not trust your eyes.”
Even the fraud detection software fighting back is powered by… you guessed it… AI. The machines are now auditing the other machines.
🧠 The irony economy
SAP found that 70% of CFOs think staff are using AI to falsify expenses. And the barrier to entry? Basically zero. No coding, no Photoshop — just moral flexibility and a Wi-Fi connection.
Of course, AI platforms insist they’re “taking action.” But when the same tools that generate fake receipts are also being used to detect them, it’s starting to feel like a corporate version of Spy vs. Spy — with the finance team stuck in the middle🏦
📉 So what?
This is the most 2025 headline imaginable: employees using AI to cheat expense systems built on AI to detect expense cheats — powered, of course, by the same company whose models started the problem.
It’s funny until you realise that this is fraud at scale — and that generative AI just turned the office petty scam into an enterprise-grade operation.
Or as one fraud expert put it: “There’s zero barrier to entry.” Which, ironically, is exactly what every startup pitch deck says too.
📖 Read more: FT – Businesses deceived by AI-generated fake receipts
🧾 CBA to do your own performance review? JPMorgan’s got you🧾
In the latest episode of “AI is doing the parts of your job you hate most,” JPMorgan has rolled out a chatbot to help employees write their own performance reviews — because apparently, self-reflection is now a prompt-based activity. (FT)😅
The bank’s new tool, powered by its in-house LLM Suite platform, lets employees feed in prompts (“summarise my leadership excellence”) and generates a polished review ready to submit — with just enough corporate humility to sound human.
Managers are told it’s “a starting point,” not a replacement for judgment, and definitely not to use it for salary decisions. But come on: if AI’s writing your appraisal, it’s probably already drafting your redundancy letter too😏
- JPMorgan’s LLM Suite has already “onboarded” 200,000 users since launch — developers use it to check code, bankers to polish decks, and lawyers to review contracts.
- The bank spends around $18 billion a year on tech, with CEO Jamie Dimon calling AI “the tip of the iceberg” (which, historically, has been an encouraging metaphor for no one).
- Boston Consulting Group says employees using AI to draft reviews cut their writing time by 40 percent — freeing them up to, presumably, have meetings about how productive they’ve become.
📉 So what?
Corporate America has officially entered its ChatGPT-writes-your-thoughts-about-ChatGPT phase.
Performance reviews were already a theatre of polite exaggeration — now they’re machine-generated polite exaggeration. The irony? The same executives praising “AI efficiency” are now grading staff for “authentic self-reflection.”
Still, if you can’t be bothered to write your own feedback, the bank’s AI will do it for you. Just don’t be surprised when your next review reads:
“Exceeded expectations in automating oneself out of relevance.”
📖 Read more: FT – JPMorgan offers staff AI chatbot to help write performance reviews
📈 The tools behind the tech📉
📦Product📦
📏Design📏
👩🏿💻Code👩🏿💻
🏢The business behind the tech🏢
🌐Partner Events & Opportunities 🌐
Below are the top opportunities we want to highlight to you this week! If you want to see more, then check out our new website where we have a whole page dedicated to events and opportunities from us and our partners:
https://www.colorintech.org/events
🙌🏾The latest from the Colorintech team🙌🏾