Two ways to see AI. I’m choosing this one.


Hi Reader,

AI is moving at a dizzying pace. That speed has unlocked new vulnerabilities and given bad actors some powerful tools. It is easy to stare at the headlines and picture a dystopian future. I recently attended a town hall Q&A hosted by my company's CTO, and it left me with some thoughts I wanted to share.

The discussion centered on using and implementing AI responsibly and ethically, which naturally led to a lot of questions about the bad possibilities the world is seeing unfold with AI (and admittedly, there are plenty of possibilities we don't even know about yet).

Here is the choice I keep coming back to.

  1. I can lean into fear and assume the worst.
  2. Or I can remember the other side of the coin. If AI helps bad actors move faster, it also helps good actors move just as fast. That levels the playing field more than we think.

Since there is no data proving that bad actors outnumber good ones, the future we anchor to is a decision. Perspective matters. Where do we place our hope and conviction?

When I zoom out, this moment feels a lot like the early internet. The dot-com era was chaotic. Fraud spiked. Privacy took hits. It was not smooth. And yet, the internet eventually matured and transformed how we learn, connect, and work. That same internet let me build a career I love, meet with doctors across state lines, and access resources that would have been out of reach before. Messy beginnings. Meaningful progress.

AI is tracking along a similar arc. A few steps back before bigger leaps forward.

  • In medicine, AI is helping spot patterns humans miss, accelerating diagnosis and speeding drug discovery. Lives will be saved.
  • In infrastructure, data centers use a lot of energy, but AI is part of the fix. Smarter cooling and predictive maintenance are already cutting waste.
  • In cybersecurity, yes, scams get more sophisticated. At the same time, AI is catching fraud in milliseconds and flagging anomalies faster than rule-based systems ever could.

Yin and yang. Without the bad, we cannot fully experience the good. Without the good, we do not recognize the bad. There is no meaningful reward without some level of risk. If we believe AI might be one of humanity’s biggest risks, we also have to acknowledge the potential for outsized reward.

My stance is simple. This is not a contest between humans and AI; it is good actors versus bad ones, with AI as the tool each side wields. Humans will learn to steer this. The good actors who want to enhance the human experience will outpace the ones trying to harm it. No one actually wants a dystopian world. We make that future less likely by learning, staying open, building conviction, and speaking up for what is right.

Try this week

  • Pick one useful belief about AI and practice it. Mine: “AI is a tool that amplifies intent. I will use it to create value.”
  • Choose one tiny workflow to test. Draft a tough email. Summarize a meeting. Build a checklist. Measure the time saved.
  • Share one hopeful AI use case with your team or family. We change the narrative when we share better stories.
  • Notice how AI is being used around you, and advocate for uses that make humanity better, not worse.

Your turn

Where are you landing on this choice right now? What is one specific AI use case that gives you hope? Hit reply and tell me. I read every note.

Have a great week!
Chelsea


