Macro Effects

We're living through the micro-impact phase of AI integration. As a developer and learner, I've felt the immediate benefits: better code generation, better research assistance, faster workflows.

What does this tell us about the next 10 years? What are the compounding and emergent effects of these micro-improvements?

I personally think the business case for AI is straightforward: more productivity, efficiency, and profitability. Do more, faster, with less. But this framing assumes we already know, and are confident in, what "more" should look like.

If agents take on more of our routine cognitive tasks, what does that free us up to do? The easy answer is whatever humans can do better than machines. But I'm not sure we even know exactly what those things are, or how to measure whether we're actually focusing on them.

Is it taste-making? But can't we post-train LLMs to learn different "tastes" (e.g., via RL or LoRA adapters)?
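To make that question concrete, here is a minimal, hypothetical sketch of what "post-training a taste" could look like using LoRA adapters with Hugging Face's transformers and peft libraries. The base model, hyperparameters, and training text are all illustrative assumptions, not a recipe:

```python
# Hypothetical sketch, not a recipe: what "post-training a taste" might look
# like as a LoRA fine-tune. Assumes Hugging Face transformers + peft are
# installed; "gpt2" stands in for whatever base model you'd actually use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA trains small low-rank adapters on top of a frozen base model, so each
# learned "taste" is a cheap, swappable add-on rather than a whole new model.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights

# One illustrative gradient step on text exemplifying the target taste.
batch = tokenizer("A sentence written in the style we want to teach.",
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

Whether an adapter like this captures taste, or merely mimics surface style, is exactly the open question.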

It's often said that AI will take over "menial, repetitive tasks," but where that line sits becomes harder to predict as LLMs improve at "reasoning" and "critical thinking."

What then are the human-machine distinctions?

Interestingly enough, we've continued to describe AI capabilities in a way that is very similar to how we describe human capabilities. We say models are getting better at reasoning and critical thinking.

Are we assuming that human thought processes are the gold standard for intelligence, or are we just using familiar language to describe something fundamentally different?

I personally don't think the question is whether AI can think like humans, but whether human-like thinking is even the most effective approach for certain problems. Perhaps machines learn and reason best not when they model human thought processes, but when they find entirely different pathways to solutions.

So then what should we as humans focus on? If machines may eventually surpass us in intelligence and a slew of different capabilities, what do we bring to the table...our cognitive biases, emotional reasoning, and contextual understanding? Can an LLM be trained to read between the lines, to "feel" nuance?

I often find myself drawn to music, art, and literature with a backstory: when I know the creator or their background, when I can appreciate how their life has shaped their work, their brand and personality.

Could an LLM achieve the same? Would that require anthropomorphizing it, imbuing it with human-ness and mortality?

While I have no answers to any of the questions I've posed, I do have hope for how further AI integration could change the way we work:

I personally believe corporate structures often force people into short-term thinking. Artificial scarcity (often presented as limited time) pushes employees to optimize for whatever gets them the fastest promotion, the quickest wins, the highest visibility. Individually rational decisions lead to outcomes that aren't always best for the company, society, or even ourselves in the long run.

Suppose that rather than replacing jobs, AI's productivity gains eliminate the time and resource scarcity that forces these trade-offs. While I don't think we'll completely dismantle the incentive structures that push employees toward individually beneficial work, they could become less pervasive.

Perhaps the advent of AI can become a forcing function to design work environments that align individual incentives with the collective good. A workplace that's more joyful, playful even! One that is collaborative and progressive, where people have the luxury of thinking long-term, of building things that matter rather than things that advance careers.

I'm not sure how we would measure progress toward this vision. Traditional business metrics like productivity, efficiency, and profitability capture some of the value AI brings but miss the bigger questions. Perhaps we'll need to discover and name new ways of measuring human flourishing, creative fulfillment, and collective progress.

"...we have seen that virtually anything that can be measured can be optimized." (https://www.jasonwei.net/blog/asymmetry-of-verification-and-verifiers-law)