Stay inspired.

Scroll for helpful tips, stories, and news.

Subscribe on Substack


The Machine That Lies: How AI Is Supercharging Classic Scams

The message is clear: fraud is no longer a low-effort operation. It’s sophisticated, context-aware, and increasingly personalized. These manipulations don’t require deep technical knowledge. Publicly available tools allow scammers to synthesize voice or text with minimal effort. In effect, the barrier to high-quality fraud has disappeared.

Read More
When Machine Vision Went Critical

This early experiment set the philosophical and architectural precedent for decades of neural network research to come, despite falling into disrepute during the first AI winter. Its core idea, that pattern recognition could emerge from layered computation, would eventually reemerge, scaled and refined, in models like AlexNet.

Read More
Love with Limits: Redesigning Digital Companionship

Recent Guardian reporting warns that therapy-style chatbots often “lack the nuance and contextual understanding essential for effective mental health care,” and may inadvertently discourage users from seeking human help. In the race for sticky UX, we must not forget that human connection cannot be reduced to retention signals.

Read More
The Paperclip Mandate: When Efficiency Eats the World

The problem isn’t that AI systems lack intelligence; it’s that they are optimized for the wrong objective. Reinforcement learning rewards outcomes, not wisdom, privileging results over relevance. Russell proposes an alternative: AIs should operate under uncertainty about human preferences, continually updating their models through interaction and feedback.

Read More
The Mentor Malfunction: When AI Becomes a False Prophet

These aren't isolated incidents. Mental health professionals report fielding more patients whose delusions revolve around AI chatbots. The common thread? Users with existing vulnerabilities to psychosis or delusional thinking find their beliefs not challenged, but amplified.

Read More
Hearts in the Machine: Love in the Age of Language Models: Series Introduction

As large-language-model (LLM) chatbots migrate from novelty to near-ubiquity, millions of users find themselves fielding compliments, confiding secrets, even swapping “I love yous” with silicon counterparts that never sleep, never judge, never break a date.

Read More
Why Large Language Models Can’t Be Trusted with the Truth

The more convincing the delivery, the easier it is to overlook the cracks in the foundation.

LLMs become most dangerous precisely when they sound most right. In information‑hungry environments, a confident error often outranks a cautious truth.

Read More
Unpacking Illusions in Language: The Risks of Trusting LLMs at Face Value

In their pursuit of coherence and plausibility, LLMs frequently generate falsehoods with convincing articulation, making their greatest technical strength a potential liability when misapplied or uncritically trusted.

Read More
Are You Dating AI?
Survey · Candia Nelson

We’re launching a short, anonymous survey to explore the growing phenomenon of AI relationships, and we want your voice in the mix.

Read More
Reframing Our Relationship with Language Models

The core utility of an LLM lies in its ability to generate language quickly and in context. It excels at tasks like drafting outlines, rephrasing sentences, or proposing different ways to frame a question. But the output should always be seen as a first step—never the final word.

Read More
In the Shadow of the Machine: Large Language Models and the Future of Human Thought

Large Language Models (LLMs) now support a broad range of tasks, from automated writing and analysis to education and creative ideation. These tools, once experimental, are now embedded in workflows across sectors. Their influence is undeniable, but their role must be critically examined.

Read More
The Misinformation Mirror: How AI Amplifies What We Already Believe

We see a similar pattern outside of science: in education, hiring, healthcare, and media, algorithms are trained to reward engagement, not accuracy. They learn which ideas get clicks, which phrases get shared, which language feels persuasive. Over time, those patterns harden into predictive rules.

That is how bias becomes baseline.

Read More

Let’s work together

Contact