Where I am coming from
I started McQueen Analytics in July 2020, but this trust obsession goes back much further. My professional work in this area reaches back to 2008, spanning operations, analytics, quantitative modeling, optimization, and nonprofit environments where data had to serve real people, not just dashboards.
Across every chapter, one question stayed with me: why do people decide something is credible, and what makes that credibility collapse?
I unpack more of the framework history in a brief history of trust research.
What I see right now
The internet feels mid-transition. AI-assisted language is everywhere, synthetic accounts are easier to spin up than to verify, and distribution systems still reward output speed more than source integrity.
Capability accelerated faster than norms. That gap is where trust erosion starts.
The generational contrast is real
I keep watching this through a cross-generational lens. Older Gen Z is fully in the workforce while Millennials and Gen X still remember early-web trust rules. At the same time, many Boomers who warned their kids about online scams now face AI voice fraud and synthetic social feeds themselves.
Two moments that pushed me to reboot this site
First, I recognized AI-assisted writing patterns in a thread critiquing AI slop. The writer later confirmed using AI assistance. That paradox stuck with me.
Source: X thread
Second, I watched reporting on fake personas and inherited authority through purchased web properties. It was a clean example of synthetic trust-laundering in the wild.
Source: YouTube investigation
Why the radiationbox thesis fits this moment
I see this transition as more than a story about models. It is also about energy and governance. As AI infrastructure grows, nuclear and fusion are moving from abstract debate into practical planning.
That is what the name means to me: radiation can power extraordinary progress and irreversible harm. Outcomes depend on incentives, guardrails, and whether we optimize for long-term trust or short-term extraction.
What I want this blog to do
I want this space to document patterns early, pressure-test ideas publicly, and keep trust at the center of the AI transition. This is my perspective page, not an official company memo.
Working note: I will keep this as the canonical personal origin post and link future threads back here.