
"In 1974, economist and metalworker Harry Braverman wrote Labor and Monopoly Capital, which showed how technology under capitalism shifts knowledge from workers to management, not because automation demands it but because control-seeking managers and capitalists do. Just over a half century later, his insight remains urgent: An invention offers options, but power often determines which are pursued."
"Large language models (LLMs) that could amplify community wisdom and help workers reclaim integrated mission-driven practice are instead often extracting experiential knowledge from frontline staff, centralizing decision-making in management-controlled systems, and replacing contextual judgment with standardized processing."
"The tacit understanding of experienced staff, knowing which families need outreach, when silence signals distrust, and which community leaders bridge cultural gaps, is extracted into databases and algorithms. Staff become processors"
Braverman showed that managerial control, rather than technology itself, drives the transfer of workplace knowledge from workers to management. When nonprofits deploy AI without safeguards, tacit, experiential knowledge from frontline staff is captured and encoded into centralized systems and models. Large language models can centralize decision-making, standardize responses, and replace contextual judgment with efficiency-driven processing. Those dynamics erode worker autonomy, relational trust, and integrated mission-focused practice. Protecting mission and community relationships requires measures to preserve staff judgment, decentralize control over data and models, and prioritize relational and justice-oriented metrics over narrow efficiency goals.
Read at Non Profit News | Nonprofit Quarterly