Findings

Selling the Story

Kevin Lewis

November 16, 2025

The Authenticity of Purpose Claims: Firm Capacity and Job Seeker Responses to Recruitment Efforts
León Valdés et al.
Management Science, forthcoming

Abstract:
Whereas corporate purpose involves a claim made to galvanize stakeholders, recent research on the topic has not examined it as a claim. Given the information asymmetry that many evaluators of purpose claims face, a key question concerns the conditions under which such claims are not simply viewed as cheap talk but are instead perceived as authentic. We argue that the difficulty and future orientation inherent to purpose claims make firm capacity a key source of authenticity and, ultimately, positive evaluations. We examine these ideas in the labor market context, where employers often present purpose claims to job seekers facing information asymmetry. First, we develop and validate a novel measure of purpose claim strength using a combination of topic modeling, dictionary-based validation, and experimental validation. Using this measure, we test our capacity hypothesis with job application field data, using firm size as a proxy for capacity. We find that high-purpose job posts receive approximately 50% more applications than low-purpose job posts when the firm has more than 1,000 employees, but only about 10% more when the firm has fewer than 50 employees. In a second study, we use vignette experiments to directly test our hypothesized mechanism. We show that, conditional on a strong purpose claim, size manipulations shape capacity perceptions, leading to greater perceived authenticity and increased application likelihood. Next, holding size constant, we show that an affiliation-based manipulation leads to similar results. Our paper helps scholars understand what gives authenticity to purpose claims and helps practitioners understand how they can more effectively communicate purpose.
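The dictionary-based component of such a measure can be pictured in a few lines. The word list and token-share scoring rule below are purely illustrative assumptions, not the authors' validated instrument:

```python
# A stylized sketch of dictionary-based scoring for purpose-claim
# strength; the term list and scoring rule are illustrative assumptions,
# not the measure developed in the paper.
PURPOSE_TERMS = {"mission", "purpose", "impact", "society", "future"}

def purpose_score(job_post: str) -> float:
    """Share of tokens in a job post that hit the purpose dictionary."""
    tokens = job_post.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in PURPOSE_TERMS for t in tokens) / len(tokens)

print(purpose_score("Join our mission to build a better future for society."))
# -> 0.3 (3 of 10 tokens hit the dictionary)
```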


A Definition of AGI
Dan Hendrycks et al.
Center for AI Safety Working Paper, October 2025

Abstract:
The lack of a concrete definition for Artificial General Intelligence (AGI) obscures the gap between today's specialized AI and human-level cognition. This paper introduces a quantifiable framework to address this, defining AGI as matching the cognitive versatility and proficiency of a well-educated adult. To operationalize this, we ground our methodology in Cattell-Horn-Carroll theory, the most empirically validated model of human cognition. The framework dissects general intelligence into ten core cognitive domains (including reasoning, memory, and perception) and adapts established human psychometric batteries to evaluate AI systems. Application of this framework reveals a highly "jagged" cognitive profile in contemporary models. While proficient in knowledge-intensive domains, current AI systems have critical deficits in foundational cognitive machinery, particularly long-term memory storage. The resulting AGI scores (e.g., GPT-4 at 27%, GPT-5 at 57%) concretely quantify both rapid progress and the substantial gap remaining before AGI.
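The aggregation step can be pictured with a minimal sketch. Equal domain weighting, the placeholder domain names, and the example profile below are our assumptions; the abstract names only reasoning, memory, and perception among the ten domains:

```python
# Minimal sketch of the scoring idea, assuming equal domain weighting
# (the paper may weight domains differently). Domain names beyond
# reasoning, memory, and perception are placeholders.
DOMAINS = [
    "reasoning", "memory", "perception",  # named in the abstract
    "domain_4", "domain_5", "domain_6",   # placeholders for the rest
    "domain_7", "domain_8", "domain_9", "domain_10",
]

def agi_score(domain_scores: dict[str, float]) -> float:
    """Aggregate per-domain proficiency (0-100) into one AGI score."""
    missing = set(DOMAINS) - set(domain_scores)
    if missing:
        raise ValueError(f"missing domains: {missing}")
    return sum(domain_scores[d] for d in DOMAINS) / len(DOMAINS)

# A hypothetical "jagged" profile: strong on knowledge-heavy domains,
# near zero on long-term memory storage, as the abstract describes.
profile = {d: 80.0 for d in DOMAINS}
profile["memory"] = 0.0
print(f"AGI score: {agi_score(profile):.0f}%")  # -> 72%
```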


Readers Prefer Outputs of AI Trained on Copyrighted Books over Expert Human Writers
Tuhin Chakrabarty, Jane Ginsburg & Paramveer Dhillon
Columbia University Working Paper, November 2025

Abstract:
The use of copyrighted books for training AI models has led to numerous lawsuits from authors concerned about AI's ability to generate derivative content. Yet it is unclear whether these models can generate high-quality literary text while emulating authors' styles and voices. To answer this, we conducted a preregistered study comparing MFA-trained expert writers with three frontier AI models (ChatGPT, Claude, and Gemini) in writing excerpts of up to 450 words emulating the diverse styles of 50 award-winning authors, including Nobel laureates, Booker Prize winners, and young, emerging National Book Award finalists. In blind pairwise evaluations by 159 readers, both experts (MFA-trained writers from top U.S. writing programs) and lay readers (recruited via Prolific), AI-generated text from in-context prompting was strongly disfavored by experts for both stylistic fidelity (odds ratio [OR] = 0.16, p < 10^-8) and writing quality (OR = 0.13, p < 10^-7) but showed mixed results with lay readers. However, fine-tuning ChatGPT on an individual author's complete works completely reversed these findings: experts now favored AI-generated text for stylistic fidelity (OR = 8.16, p < 10^-13) and writing quality (OR = 1.87, p = 0.010), with lay readers showing similar shifts. These effects are robust under cluster-robust inference and generalize across authors and styles in author-level heterogeneity analyses. The fine-tuned outputs were rarely flagged as AI-generated by state-of-the-art AI detectors (a 3% rate, versus 97% for in-context prompting). Mediation analysis reveals that this reversal occurs because fine-tuning eliminates detectable AI stylistic quirks (e.g., cliché density) that penalize in-context outputs, altering the relationship between AI detectability and reader preference. While we do not account for the additional human effort required to transform raw AI output into cohesive, publishable novel-length prose, the median fine-tuning and inference cost of $81 per author represents a dramatic 99.7% reduction compared to typical professional writer compensation. Author-specific fine-tuning thus enables non-verbatim AI writing that readers prefer to expert human writing, providing empirical evidence directly relevant to copyright's fourth fair-use factor, the "effect upon the potential market or value" of the source works.
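Two of the abstract's figures invite quick back-of-the-envelope readings. Assuming each odds ratio describes the odds of preferring the AI text in a head-to-head pairing (a 50/50 baseline), it converts to an implied win rate; and the stated 99.7% reduction lets one infer, rather than quote, a baseline writer fee:

```python
# Back-of-the-envelope reading of the abstract's numbers, assuming the
# odds ratios are the odds of preferring AI text in a pairwise matchup.
def or_to_pref(odds_ratio: float) -> float:
    """Convert an odds ratio to the implied win probability in a pair."""
    return odds_ratio / (1 + odds_ratio)

print(f"In-context, expert style preference: {or_to_pref(0.16):.0%}")  # ~14%
print(f"Fine-tuned, expert style preference: {or_to_pref(8.16):.0%}")  # ~89%

# The stated 99.7% cost reduction implies a baseline fee of roughly
# $81 / (1 - 0.997) = $27,000 (our inference, not a figure from the paper).
print(f"Implied baseline fee: ${81 / (1 - 0.997):,.0f}")
```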


Robust AI Personalization Will Require a Human Context Protocol
Anand Shah et al.
MIT Working Paper, July 2025

Abstract:
Robust AI personalization will require a Human Context Protocol (HCP): a user-owned, secure, and interoperable preference layer that grants individuals granular, revocable control over how their data steers AI systems. By replacing siloed, behavior-inferred signals with direct preference articulation, HCP unifies fragmented data, lowers switching costs, and enables seamless portability across AI services, fostering a more competitive ecosystem. This paper outlines core design principles (natural-language preference storage, scoped sharing, and strong authentication with revocation) that extend earlier personal-data architectures to the scale and stakes of modern generative AI. By centering control in users, HCP is not merely a technical convenience but a necessary foundation for AI systems that are genuinely personal, interoperable, and aligned with diverse human values.
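The design principles suggest a simple data shape. The record below is a toy sketch; its field names, scope model, and revocation semantics are our assumptions, not a specification from the paper:

```python
# A toy sketch of what an HCP preference record might look like; the
# fields and methods are illustrative assumptions, not the paper's spec.
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class PreferenceRecord:
    preference: str                                 # natural-language preference text
    scopes: set[str] = field(default_factory=set)   # services allowed to read it
    token: str = field(default_factory=lambda: uuid4().hex)  # grant handle
    revoked: bool = False

    def share_with(self, service: str) -> None:
        self.scopes.add(service)

    def revoke(self) -> None:
        """Revocation invalidates the grant for every scoped service."""
        self.revoked = True
        self.scopes.clear()

    def readable_by(self, service: str) -> bool:
        return not self.revoked and service in self.scopes

rec = PreferenceRecord("Prefer concise answers; never use my data for ads.")
rec.share_with("assistant.example")
assert rec.readable_by("assistant.example")
rec.revoke()
assert not rec.readable_by("assistant.example")
```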


Open at the Core: Moving from Proprietary Technology to Building a Product on Open Source Software
Jérémie Haese & Christian Peukert
Management Science, forthcoming

Abstract:
Firms are increasingly moving away from proprietary technology to building commercial products on top of open source software. However, it is unclear how such commercialization of open source affects contributions from the community, as well as product quality and firm performance. Web browser technology provides a unique setting to study these questions. The largest open source browser project is Chromium, led by Google, which serves as the core of various web browsers. Unexpectedly, Microsoft announced a drastic change in strategy in 2018 and adopted Chromium in a complete redesign of its web browser Edge. Unique data lets us compare Chromium to other open source technologies, and Chromium-based web browsers to other web browsers. We find that overall development activity increases after Microsoft adopts Chromium, predominantly because Microsoft starts to contribute to the project. We also see a modest increase in contributions from external developers. We further document an increase in scrutiny, evident from an increase in the number of individuals performing code reviews and a surge in security vulnerability reporting. We also find positive effects for Microsoft. Edge makes a giant leap in functionality, putting it on par with the market leader, Chrome. With the adoption of Chromium, Microsoft fixes more bugs, accelerates release cycles, and increases the market share of Edge at the expense of other, less popular Chromium browsers. We discuss a number of general implications for managers and policymakers.


Shrinkflation and Consumer Demand
Aljoscha Janssen & Johannes Kasinger
Marketing Science, forthcoming

Abstract:
This study investigates shrinkflation, the practice of reducing product size while maintaining or only slightly changing prices, in the U.S. retail grocery market. We analyze a decade of retail scanner data to assess the prevalence and patterns of product size changes across various product categories. Our findings show that approximately 1.92% of products have been downsized. Measured by total sales, product downsizing is more than five times as prevalent as upsizing. Product downsizing typically occurs without a corresponding decrease in price and is widespread across product categories. Consequently, consumers end up paying more per unit volume. We further find that consumers are more responsive to price adjustments than to changes in product size. This finding suggests that reducing product sizes is an effective strategy for retailers and manufacturers to increase margins or respond to cost pressures, offering valuable implications for retailers and policymakers.
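A worked example shows why downsizing at a constant price raises the unit price. The 64 oz to 59 oz sizes and the $3.99 sticker price are hypothetical, not drawn from the paper's scanner data:

```python
# Worked unit-price example of shrinkflation; the sizes and price are
# hypothetical, not figures from the paper.
old_size, new_size, price = 64.0, 59.0, 3.99  # ounces, dollars

old_unit = price / old_size
new_unit = price / new_size
increase = new_unit / old_unit - 1

print(f"Unit price: ${old_unit:.4f}/oz -> ${new_unit:.4f}/oz "
      f"(+{increase:.1%}) with no sticker-price change")
# -> roughly an 8.5% effective price increase per ounce
```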

