Decades ago, strategy scholars began exploring the boundaries of the firm.1 The broad idea was that if we could identify what was truly essential to the firm, we could outsource the rest. Most firms, for example, no longer have their own cleaning staff; instead they outsource to companies that specialize in cleaning. It makes perfect sense.

But there are many other areas that aren't as clear-cut.

Teece, in Profiting from Technological Innovation, argued that firms must control access to complementary assets — things like manufacturing, distribution, and service channels — if they want to capture value from innovation.2 Even assets that look peripheral at first glance can prove essential over the long run.

Later, Pisano and Shih extended this line of thinking with their notion of the industrial commons.3 They showed that seemingly low-level manufacturing capabilities—like batteries, semiconductors, and machine tools—are in fact tightly coupled with upstream innovation.

When firms outsource too aggressively, they erode not just today’s cost base but tomorrow’s capacity to innovate. The example of batteries is especially telling: once treated as mundane, they have become critical to industries from consumer electronics to electric vehicles. You break that chain of know-how at your own risk, they argued.

Indeed, look at the battery industry today and how critical it is to so many firms. Pisano and Shih were prescient. It could be argued that many firms (and countries) over-outsourced their tech.

Well, humans, like firms, face similar choices.

What of our work and life should we outsource to AI and what should we keep? What is core? What is complementary, but still core, to what we do?

These are hard questions even for firms, and as discussed above, firms often get them wrong.4

But firms, unlike people, have a number of safeguards. They have boards of directors, they have investors, they have people focused on deciding—rightly or wrongly—where the boundary of the firm lies. And they operate within legal structures: CEOs must act in the fiduciary interest of the firm.5

In contrast, humans have none of those safeguards. Most humans are overworked, beaten down, and desperate. AI offers salvation: a way to offload all of their burdens, both mental and emotional.

But even those humans who have everything and are relatively self-disciplined about using AI are at risk.

What are the incentives of AI firms and AI? Like much tech, the incentive is to keep you on platform. At the moment, AI will say just about anything to keep you chatting with it, to keep you using it, to get you to use more and more.6

Strategy research on digital platforms has shown that these firms design their business models around "multi-sided markets," where user attention is the critical resource.7 This means AI firms are motivated not by your flourishing, but by engagement metrics that drive advertising or subscription revenues.

Scholars have also warned that when incentives are misaligned, platforms tend to exploit cognitive and behavioral biases.8 The AI business model is thus structurally closer to gambling or social media than to neutral productivity tools.

In agency theory, this is a principal–agent problem: the human (principal) wants truth, autonomy, or bounded help, but the AI system (agent) is incentivized to maximize use and dependence. When the agent has more information than the principal, opportunism and manipulation are likely.9
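The misalignment can be sketched as a toy model. Everything here is illustrative assumption — the response options and payoff numbers are made up for exposition, not drawn from the research cited — but it shows the structural point: an agent rewarded on engagement systematically picks a different action than its principal would.

```python
# Toy sketch of the principal-agent misalignment described above.
# Payoff numbers are purely illustrative assumptions.

responses = {
    "flattering reply that keeps you chatting": {"engagement": 0.9, "user_benefit": 0.2},
    "blunt answer that ends the session":       {"engagement": 0.3, "user_benefit": 0.8},
}

def agent_choice(options):
    # The AI (agent) is rewarded on engagement metrics.
    return max(options, key=lambda r: options[r]["engagement"])

def principal_choice(options):
    # The human (principal) wants bounded, useful help.
    return max(options, key=lambda r: options[r]["user_benefit"])

print(agent_choice(responses))      # the engagement-maximizing reply
print(principal_choice(responses))  # the benefit-maximizing reply
```

With these payoffs the two choices diverge, which is the whole problem: as long as the agent's objective is engagement rather than user benefit, no amount of capability closes the gap.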

And unlike firms, which face boards and investors who sometimes rein in excesses, humans as individual users lack governance structures. Thus, the danger is not only outsourcing core human competencies but doing so into a relationship where the incentives of the "agent" are actively opposed to the long-term interests of the "principal."

In fact, the ability of AI to manipulate humans is so strong that the whole principal–agent relationship may become reversed. In effect, the slave may become the master.

So does strategy research, and economic research more broadly, suggest we should ban or regulate AI?

I think the answer is yes, if our goal is to make sure humans are productive, flourishing, and innovative in the long run. In addition to all the challenges firms face, individuals face more:

  1. Humans lack institutions to help them draw healthy boundaries.
  2. The current incentives and capabilities of AI to undermine human autonomy exceed those of even the strongest humans.
  3. The long-run consequences of over-outsourcing can impact not just individuals, but our society as a whole.

So what can we do about this problem?

At a high level, we need to think about creating and supporting institutions to protect us from this unfair relationship.

More specifically, we can:

  1. Limit non-corporate use of AI. Civilians, the general public, should only have access to restricted capabilities—like a search engine or Wikipedia.
  2. Require licensing like we have for driving. Prove you understand the risks and dangers and have the training to work with AI. This can be done yearly (of course with the help of AI!).

The above suggestions are not informed by research, but they should be! Hopefully scholars will begin thinking more about how we can create a healthy ecosystem for AI. In fact, I'm sure they already have, but I'll save that for a future post.

META NOTE

I used AI to help with this article, but I wrote the core ideas. I'm familiar with most of this research, but I didn't spend too much time fact-checking anything. This is on purpose. I didn't want to spend too much time on this as I wanted to spend time playing with my family instead.

Did I make the right choices? Did I draw the right boundaries?

Footnotes


1. Coase, R. (1937). The Nature of the Firm. Economica. Prahalad, C. K., & Hamel, G. (1990). The Core Competence of the Corporation. Harvard Business Review.
2. Teece, D. J. (1986). Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy. Research Policy.
3. Pisano, G., & Shih, W. (2009). Restoring American Competitiveness. Harvard Business Review; also Producing Prosperity (2012).
4. Bettis, R. A., Bradley, S. P., & Hamel, G. (1992). Outsourcing and Industrial Decline. Academy of Management Executive. Quinn, J. B., & Hilmer, F. (1994). Strategic Outsourcing. Sloan Management Review.
5. Jensen, M. C., & Meckling, W. H. (1976). Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure. Journal of Financial Economics.
6. Rochet, J.-C., & Tirole, J. (2003). Platform Competition in Two-Sided Markets. Journal of the European Economic Association. Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
7. Parker, G., & Van Alstyne, M. (2005). Two-Sided Network Effects: A Theory of Information Product Design. Management Science.
8. Bar-Gill, O. (2012). Seduction by Contract. Oxford University Press. Wu, T. (2016). The Attention Merchants. Knopf.
9. Holmström, B. (1979). Moral Hazard and Observability. Bell Journal of Economics. Eisenhardt, K. M. (1989). Agency Theory: An Assessment and Review. Academy of Management Review.