The theoretical framework behind this proposal is mechanical materialism, a philosophical system argued for by the economist Paul Cockshott and the philosopher Katerina Kolozova. It seeks to extend the Newtonian approach to science to sociology, unifying both under a combined framework that employs conservation laws, statistical mechanics, and information theory. Researchers associated with this program have compiled long-run profit rate datasets reaching back to 1870, conducted input-output studies in over forty countries, and developed econophysics models of money distribution.
Kolozova draws on François Laruelle’s “non-philosophy” to argue that dialectics should be understood as a method of argumentation in its original Greek sense rather than as an ontological principle about the structure of reality. This position, developed in Toward a Radical Metaphysics of Socialism (Punctum Books, 2015), performs what Kolozova calls a “non-Marxist reading of Marx” to recover his descriptive language about economic processes. Defending Materialism (Bloomsbury Academic, 2024), by Kolozova, Cockshott, and Michaelson, connects Marx to the history of scientific materialism encompassing mechanics, information theory, Turing computation, and Boltzmann statistical mechanics.
Cockshott’s “Newtonian Marx” thesis draws a direct structural parallel between Capital and the Principia. The foundational law of Capital, on this reading, is a conservation law: in the exchange of commodities, abstract socially necessary labor time is conserved. This parallels conservation of momentum or energy in Newtonian mechanics. Just as Newton defines mass as the measure of undifferentiated matter, Marx defines a corresponding social quantity, undifferentiated abstract labor time. The law of surplus value then functions as a universal law of motion whose operations Marx tracks through profit, interest, and rent, analogous to Newton tracking gravitational force through planetary orbits.
Marx observes that commodity exchange is an equivalence relation: it is reflexive, symmetric, and transitive. If twenty yards of linen exchange for one coat, and one coat exchanges for ten pounds of tea, then twenty yards of linen exchange for ten pounds of tea. Marx states transitivity implicitly through the structure of what he calls “Form B” (the Expanded Form of Value), where one commodity’s value is expressed in a series of others. The transitive closure of these equivalences is what drives the transition to “Form C” (the General Form). Marx establishes symmetry in a subsequent passage and treats reflexivity as too obvious to require statement. From this observed equivalence relation, Marx deduces that there must be some abstract third thing conserved in the exchange, some substance, value, of which both sides contain the same quantity. Cockshott argues that this deduction is structurally identical to Newton’s deduction of the conservation of momentum from the observed behavior of colliding pendulums: an equivalence relation between physical states indicates a conservation process.
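The transitivity step can be made concrete with Marx’s own numbers. The sketch below simply composes the two observed exchange ratios and checks that the derived linen-for-tea ratio matches the one Marx states:

```python
from fractions import Fraction

# Exchange ratios from Marx's example:
# 20 yards of linen <-> 1 coat, and 1 coat <-> 10 lb of tea.
linen_per_coat = Fraction(20, 1)   # yards of linen per coat
coats_per_tea = Fraction(1, 10)    # coats per lb of tea

# Transitivity: if exchange is an equivalence relation, the linen/tea
# ratio is forced by composing the two observed ratios.
linen_per_tea = linen_per_coat * coats_per_tea  # yards of linen per lb of tea

# So 20 yards of linen must exchange for 10 lb of tea.
tea_for_twenty_yards = Fraction(20, 1) / linen_per_tea
print(tea_for_twenty_yards)  # → 10
```

The point of using exact rationals is that the consistency of the ratio system, not any approximation, is what licenses the inference to a conserved third quantity.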
Conservation of energy was formalized by Mayer (1842), Joule (1843–1845), and Helmholtz (1847). Marx was writing Capital in the 1860s. The structural homology is striking. Value, like energy, appears in various forms (commodity, money, capital) while its substance, abstract labor time, is conserved in exchange. Marx’s distinction between labor and labor-power parallels Watt’s distinction between work and power so closely that, as Cockshott observes, it would have been immediately comprehensible to any Victorian reader familiar with steam engineering. The profit-generating circuit M → C → M′ (money → commodity → more money) then becomes a symmetry-breaking problem. If value is conserved in exchange, surplus value cannot arise from exchange itself but must be explained by the production process, where the commodity labor-power creates more value than it costs.
This reading is controversial. Marx’s own letters show he sided with Hegel’s low opinion of Newton, preferring Kepler. His references to Newton in Capital are peripheral (a footnote analogy in Volume I, Chapter 1, comparing the difficulty of analyzing economic forms to the analysis of physical bodies) and nowhere does he draw the systematic structural parallel that Cockshott proposes. Defending Materialism is frank about the tension, acknowledging that Marx’s dialectical presentation was “a shortcut” that “lacked the modern standards of scientific rigor, relying on dialectical sleight of hand.” Marx’s argument works because he stood on the shoulders of predecessors, Smith and Ricardo, whose observational work had already established the connection between labor content and price. What matters, on this account, is the structural parallel itself (conservation laws generating laws of motion), which connects Marx’s political economy directly to the econophysics research program and its empirical results, whether or not Marx himself would have accepted the connection.
Why Not Dialectics
Neither Marx nor Engels ever used the exact phrase “dialectical materialism.” Engels used “materialist dialectic” (materialistische Dialektik) in Anti-Dühring (1878) and Ludwig Feuerbach (1886), a related but, as Tony Burns notes, “not quite the same” formulation. The term was coined by the self-educated German tanner Joseph Dietzgen in his 1887 essay “Excursions of a Socialist into the Domain of Epistemology.” Georgi Plekhanov popularized it from 1891, and Stalin’s 1938 Short Course codified diamat as Soviet state philosophy. Engels’s Dialectics of Nature (unpublished in his lifetime, assembled from notes dating to 1873–86) identified three supposedly universal laws: the transformation of quantity into quality, the interpenetration of opposites, and the negation of the negation, all governing nature, society, and thought alike.
The critique of dialectics has a long history within Marxism. Lukács (1923) argued that dialectics involves a subject-object relation absent in nature, implying Engels’s extension of dialectical laws to natural processes was illegitimate. Sidney Hook (1933) argued that the apparently mysterious character of the Marxian dialectic is due to nothing more than its Hegelian terminology. Lucio Colletti (1969) drew on Kant’s concept of Realopposition to argue that Marx’s social analysis deals with real oppositions (class conflict, capital versus labor) not dialectical contradictions. Since dialectical contradiction violates the principle of non-contradiction, Colletti concluded that insofar as Marxism relies on it, Marxism cannot be considered a science. His teacher Galvano Della Volpe had anticipated this in Logic as a Positive Science (1950), arguing Marx’s method follows a Concrete-Abstract-Concrete scientific circle, like Galileo’s, not Hegel’s speculative descent from Abstract to Concrete.
The Analytical Marxists of the 1980s rejected dialectics wholesale, with Jon Elster arguing in Making Sense of Marx (1985) that Marx’s explanatory apparatus should be rebuilt on methodological individualist foundations, replacing functional explanation with game theory and the logic of collective action, and treating dialectics as an obstacle to clarity rather than a source of insight.
Althusser (1962) attempted to replace Hegel’s “simple contradiction” with “overdetermination,” a model of multiple contradictions each with its own temporality and relative autonomy. Thomas Nail’s Marx in Motion (Oxford University Press, 2020) offered an anti-dialectical reading from another direction, arguing that Marx’s deepest materialism is Epicurean and kinetic rather than dialectical. As Kołakowski summarized, the laws of dialectics consist partly of truisms, partly of philosophical dogmas, partly of nonsense, and partly of statements that could be any of these depending on interpretation.
When Stalin elaborates what “contradictions inherent in all things” means (that things develop and die, that social forms replace one another) he uses not dialectics but what Marx called the materialist theory of history. That quantitative change leads to qualitative change is, in physics, the phenomenon of phase transitions, such as water boiling into steam at 100°C at standard atmospheric pressure. This is describable in terms of discontinuities in the entropy function without the ambiguous terminology of “quantity” and “quality.” The dialectical vocabulary adds nothing to what mechanics and thermodynamics already provide, while introducing persistent opportunities for confusion between metaphorical and literal contradiction.
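The phase-transition point can be made quantitative. The entropy of a substance jumps discontinuously at a first-order transition by ΔS = L/T, where L is the latent heat. The snippet below works this out for boiling water using standard textbook values:

```python
# Entropy discontinuity at a first-order phase transition: water boiling
# at standard atmospheric pressure. Delta S = L / T, where L is the
# latent heat of vaporization (textbook values used here).
L_vap = 2.26e6   # J/kg, latent heat of vaporization of water
T_boil = 373.15  # K, boiling point at standard pressure

delta_S = L_vap / T_boil  # J/(kg·K), the jump in specific entropy
print(round(delta_S))     # roughly 6.1e3 J/(kg·K)
```

The “qualitative leap” is thus a well-defined discontinuity in the entropy function, with no need for the vocabulary of quantity and quality.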
The positive case for mechanical materialism rests on the distinction between contradictions and laws of transformation. A dialectical materialist holds that contradictions are objective features of material reality resolvable only through material transformation. A mechanical materialist holds that what matters are not contradictions but regularities governing how systems change states under specified conditions. The distinction determines whether economics is to be practiced as a deductive philosophy, deriving conclusions from dialectical first principles, or as a physical science, deriving testable predictions from conservation constraints and statistical mechanics. The econophysics program takes the latter path.
Information Theory and Thermodynamics
If the conservation-law reading of Capital is to be more than a structural analogy, it requires a bridge between the mathematics of physics and the mathematics of economics. That bridge is the identity between thermodynamic entropy and information entropy.
The mathematical relationship between Shannon entropy (H = −Σ pᵢ log₂ pᵢ, measured in bits) and Boltzmann-Gibbs entropy (S = −k_B Σ pᵢ ln pᵢ, measured in joules per kelvin) is not an analogy. They share the same form. Both are the negative expected value of the logarithm of a probability distribution, and the two expressions differ only by a change of logarithmic base and a dimensional constant. Since ln x = log₂ x · ln 2, Shannon’s formula can be rewritten as H = −(1/ln 2) Σ pᵢ ln pᵢ, and the Boltzmann-Gibbs formula becomes S = k_B · ln 2 · H. The entire difference reduces to the factor k_B · ln 2, which converts between the information-theoretic unit (the bit) and the thermodynamic unit (joules per kelvin). This is a consequence of the fact that both quantities measure the same underlying thing: the logarithmic measure of the number of microstates compatible with a given macrostate.

Rolf Landauer’s 1961 IBM paper established the physical bridge: erasing one bit of information requires a minimum energy dissipation of k_B · T · ln 2, approximately 0.018 eV at room temperature. This was experimentally verified in 2012 by Bérut et al. in Nature (483, 187–189), who measured the heat dissipated when erasing a single bit stored in a colloidal particle in an optical trap, confirming saturation at the Landauer bound. Subsequent experiments extended verification to nanomagnetic bits (Hong et al., 2016) and quantum molecular magnets (Gaudenzi et al., 2018). Landauer’s principle follows from the second law of thermodynamics, making it a consequence of fundamental physics. E.T. Jaynes’s 1957 Physical Review papers reframed statistical mechanics as maximum entropy inference: the Boltzmann distribution emerges from maximizing Shannon entropy subject to a constraint on average energy.
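The identity can be checked numerically. A minimal sketch, using an arbitrary distribution chosen so the bit-entropy is exact in floating point:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def shannon_entropy_bits(p):
    """H = -sum p_i log2 p_i, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def gibbs_entropy(p):
    """S = -k_B sum p_i ln p_i, in J/K."""
    return -KB * sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
H = shannon_entropy_bits(p)  # 1.75 bits
S = gibbs_entropy(p)

# The two differ only by the conversion factor k_B * ln 2:
assert abs(S - KB * math.log(2) * H) < 1e-35

# Landauer bound at T = 300 K: k_B * T * ln 2 per erased bit,
# converted to electron-volts.
E_min = KB * 300 * math.log(2)               # joules
landauer_eV = E_min / 1.602176634e-19        # ~0.018 eV
print(landauer_eV)
```

The same numbers recover the Landauer figure quoted above: roughly 0.018 eV per bit at room temperature.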
The philosophical implication is that information is physical, not Platonic. It requires material substrates and has measurable thermodynamic costs, and its capacity in any region of space is bounded by physical quantities: the Bekenstein bound S ≤ 2πk_BRE/ℏc, where R is the radius of an enclosing sphere, E is total mass-energy, ℏ is the reduced Planck constant, and c is the speed of light. Entropy is an objective property of physical systems. Stars explode and increase entropy regardless of whether anyone observes them. Information encoded in DNA exists independently of any human subjective involvement: it is not information because a person can read it; a person can read it because it is information. Through this connection, Cockshott, Cottrell, Michaelson, Wright, and Yakovenko argue in Classical Econophysics (Routledge, 2009) that productive labor involves objective physical processes of entropy reduction requiring information, a physical entity. Human labor modifies the world by imposing low-entropy order on high-entropy raw materials, guided by information encoded in skills, blueprints, and action programs. This situates economics within thermodynamics and information theory from the ground up, and it explains why the same maximum entropy formalism that governs molecular systems can be applied to economic ones.
Labor Values Predict Market Prices
With the theoretical framework in place, the empirical question becomes: If abstract labor time is the substance conserved in commodity exchange, then do labor values predict market prices?
Input-output studies comparing computed labor values to observed market prices have been replicated across many countries since the 1980s. The methodology computes the total direct and indirect labor required per unit of output using the Leontief inverse of national input-output tables, then correlates these against actual market prices. Cockshott and Cottrell (1997, Cambridge Journal of Economics) found a correlation of r = 0.977 across UK industrial sectors and tested labor against alternative “value bases,” including oil (r = 0.799), electricity (r = 0.826), and iron/steel (r = 0.576), demonstrating that labor’s predictive superiority is not an artifact of scale. The coefficient of variation of price-to-value ratios was just 0.198 for labor, versus 3.69–11.41 for the alternatives. An earlier study by Cockshott, Cottrell, and Michaelson (1995, Capital and Class) found r = 0.98 using 1984 UK data, dropping to r = 0.96 after correcting for inter-industry wage differentials.
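The core calculation in these studies is the vertically integrated labor coefficient vector v = l(I − A)⁻¹, where A is the technical coefficients matrix and l the direct labor input vector. A toy two-sector sketch (the numbers are hypothetical, chosen only to make the arithmetic visible):

```python
import numpy as np

# Toy two-sector economy. A[i, j] = units of good i used up
# per unit of good j produced (hypothetical coefficients).
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
# l[j] = direct labor (hours) per unit of good j.
l = np.array([1.0, 2.0])

# Total (direct + indirect) labor values via the Leontief inverse:
# v = l (I - A)^(-1), the vertically integrated labor coefficients.
v = l @ np.linalg.inv(np.eye(2) - A)
print(v)  # → [2.0, 2.6667]: good 1 embodies 2 hours in total
```

The empirical studies then compute exactly this vector from national input-output tables with dozens of sectors and correlate it (usually in logs) against observed sectoral market prices.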
Shaikh’s foundational analysis of the US economy (in Bellofiore (ed.), Marxian Economics: A Reappraisal, Macmillan, 1998), using data from Ochoa’s seventy-one-sector input-output model (PhD dissertation, New School for Social Research, 1984), reported mean absolute weighted deviations (MAWD) of just 9.2% between labor values and market prices. Ochoa’s own 1989 Cambridge Journal of Economics paper found 12–14% MAWD across multiple postwar years. Similar results have been found in Yugoslavia (Petrović, 1987), Greece (Tsoulfidis and Maniatis, 2002), Japan (Tsoulfidis, 2008), eighteen OECD countries (Zachariah, 2006), and Germany (Fröhlich, 2013). Most comprehensively, Işıkara and Mokre (2022, Review of Political Economy) analyzed forty-two countries over the period 2000–2017 using the World Input-Output Database, finding only small and stable deviations across more than 36,000 price vectors, the largest empirical application of its kind. Deviations between market prices and labor values fell in the range of 10–20 percent across nearly all countries in the sample.
Objections to these correlations have been addressed. Andrew Kliman’s 2002 Cambridge Journal of Economics critique argued the correlations are spurious, driven by industry size rather than any genuine price-value relationship. Nitzan and Bichler (2009) raised a related circularity objection: studies use monetary wage bills as proxies for labor, hence “correlate prices with prices.” Cockshott, Cottrell, and Valle Baeza responded in a 2014 Investigación Económica paper with three rebuttals. First, the alternative value bases test already refutes the scale artifact. If industry size drove correlations, oil and electricity should perform comparably to labor, but they do not. Second, Valle Baeza (2010) demonstrated that the proposed “per-unit” correction violates dimensional analysis, because one cannot meaningfully average the price of a barrel of oil with the price of a pencil across different physical units. Third, they proved mathematically that labor values calculated from monetary input-output tables are invariant (up to a scalar) to the price vector used, eliminating the circularity problem. Zachariah’s studies using Swedish input-output data with actual labor hours (person-years, not wage-bill proxies) confirmed results consistent with all other studies, directly addressing the circularity objection.
Vaona (2014) found weaker support using panel data methods across ten OECD countries. Mainstream economists observe that strong labor-price correlations are also consistent with simple cost-markup pricing theories, since labor is the largest cost component in most industries. Nevertheless, labor time remains an exceptionally strong predictor of relative prices spanning decades, methodologies, and dozens of countries, and stronger than any alternative value base that has been tested.
Why the Regularity Holds: Statistical Mechanics
Econophysics explains why labor values predict prices. The derivation rests on the same mathematics that governs physical systems.
Drăgulescu and Yakovenko (2000, European Physical Journal B) demonstrated through agent-based simulations that when many agents randomly exchange conserved money, the equilibrium distribution converges to an exponential Boltzmann-Gibbs form, P(m) ∝ exp(−m/T), where T = M/N is the average money per agent, the economic “temperature”. This is the same distribution governing energy among molecules in a gas. The result holds regardless of specific exchange rules, requiring only two ingredients: conservation of money in transactions and a large number of interacting agents.
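The convergence is easy to reproduce. The sketch below implements one of the exchange rules Drăgulescu and Yakovenko studied (a random pair pools its money and splits it at a uniformly random fraction); all parameters are illustrative:

```python
import random

random.seed(0)
N, m0 = 1000, 100.0
money = [m0] * N  # total money M = N * m0 is conserved throughout

# Random pairwise exchange with a uniform split of the pooled money.
for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pool = money[i] + money[j]
    eps = random.random()
    money[i], money[j] = eps * pool, (1 - eps) * pool

# Conservation holds exactly, up to float rounding.
assert abs(sum(money) - N * m0) < 1e-6

# For an exponential distribution with mean T = m0, the fraction of
# agents below the mean should approach 1 - 1/e, about 0.632.
frac_below_mean = sum(m < m0 for m in money) / N
print(frac_below_mean)
```

No agent follows any optimizing rule; the exponential shape is forced by conservation plus large numbers, which is the sense in which T = M/N acts as a temperature.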
Ian Wright extended this framework to production. In agent-based models where “zero-intelligence” agents trade randomly subject to budget and production constraints, market prices gravitate toward natural prices proportional to labor values. Correlations are r = 0.96 in ten-commodity economies (Wright 2008, Review of Political Economy; developed further in Classical Econophysics, Routledge, 2009). No agent calculates labor values. The system-level outcome emerges from conservation constraints and the law of large numbers, just as the Boltzmann distribution of molecular energies emerges from energy conservation and large particle numbers without any molecule “choosing” its energy level.
Wright drew three conclusions. First, the labor value of a commodity functions as an attractor for its market price. Second, market prices function as error signals that allocate available social labor between sectors of production. Third, the tendency of prices to approach labor values is the monetary expression of a simple commodity economy’s tendency to allocate social labor efficiently. The constant of proportionality between labor values and equilibrium prices is the Monetary Expression of Labour-time (MELT): the ratio of the aggregate rate of money exchange to the aggregate rate of labor-time exchanged in the form of commodities.
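A worked example of the MELT definition, with hypothetical round numbers chosen purely for illustration:

```python
# Hypothetical aggregates, round numbers for illustration only.
money_value_added = 2.0e12  # $ per year exchanged against commodities
labor_hours = 1.0e11        # productive labor-hours supplied per year

# MELT: aggregate money flow per aggregate hour of social labor.
melt = money_value_added / labor_hours  # $ per hour
print(melt)  # → 20.0

# A commodity embodying 5 hours of total (direct + indirect) labor
# then has a natural price of melt * 5 around which its market
# price gravitates.
natural_price = melt * 5  # → 100.0 dollars
```

The MELT is the scalar that converts the labor-value vector of the input-output studies into a price vector; deviations of actual prices from melt·v are the error signals of Wright’s second conclusion.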
Wright’s “implicit microfoundations” approach (2009, Economics: The Open-Access, Open-Assessment E-Journal) formalized the methodological principle. In statistical mechanics, knowledge of each molecule’s trajectory is not needed to predict gas laws. The “particle nature” of economic agents (many, weakly coordinated) dominates their “mechanical nature” (specific optimization behavior). From conservation constraints alone, his model generates the same statistical signatures observed in real capitalist economies: an exponential distribution of individual wage income with a Pareto tail for capital income (reproducing the two-class structure identified empirically by Yakovenko), a power-law distribution of firm sizes, a Laplace distribution of growth rates, an exponential distribution of recession durations, and a gamma-like distribution of profit rates. All of these emerge without any assumption about individual optimization.
The derivation chain runs: conservation law → maximum entropy principle → Boltzmann-Gibbs equilibrium → labor values as statistical attractors for prices.
The Class Structure as Statistical Regularity
Separate empirical work confirmed the class structure these models predict. Analysis of US tax data showed the lower approximately 97% of the income distribution follows an exponential law (additive wage income) while the upper approximately 3% follows a Pareto power law (multiplicative capital income). Wright’s agent-based models, which derive from production-and-exchange dynamics rather than empirical curve-fitting, independently reproduce this same two-class structure: an exponential bulk generated by conservative additive exchange and a Pareto tail generated by multiplicative capital returns. The two-class structure has a precise physical interpretation. The exponential (thermal) class corresponds to conservative money exchange in transactions. The Pareto (superthermal) class corresponds to compounding capital-gains income. The definitive review appeared in Yakovenko and Rosser’s 2009 Reviews of Modern Physics paper (81: 1703–1725). Ludwig and Yakovenko (2022) verified this two-class structure across thirty-six years of US data (1983–2018), and Tao et al. (2019) across sixty-seven countries.
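The two statistical signatures are straightforward to distinguish. The sketch below draws a stylized two-class sample (97% exponential wages, 3% Pareto capital income; all parameters illustrative) and checks each class against its expected law:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stylized two-class income distribution (illustrative parameters).
T = 40_000.0                   # "temperature": mean wage income
alpha, x_min = 1.5, 120_000.0  # Pareto exponent and tail cutoff
bulk = rng.exponential(T, 97_000)
tail = x_min * (1 + rng.pareto(alpha, 3_000))  # classical Pareto >= x_min

# Exponential bulk: fraction below the mean should be ~ 1 - 1/e ~ 0.632.
frac_below_mean = (bulk < T).mean()
print(frac_below_mean)

# Pareto tail: the log-log survival function is a straight line of
# slope -alpha. Estimate the slope from the empirical CCDF.
x = np.sort(tail)
ccdf = 1.0 - np.arange(len(x)) / len(x)
keep = ccdf > 0.01  # drop the noisiest extreme-tail points
slope = np.polyfit(np.log(x[keep]), np.log(ccdf[keep]), 1)[0]
print(slope)  # close to -1.5
```

In the empirical papers the same two diagnostics are applied to tax data: the bulk is linear on a log-linear plot of the CCDF, the tail is linear on a log-log plot, and the crossover locates the class boundary.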
The significance of this finding is that the class structure of capitalism is not a contingent historical feature that could be reformed away with the right policies. It is a statistical regularity arising from the same conservation laws that govern physical systems. The exponential distribution of the lower class is the maximum entropy distribution subject to a conservation constraint on money. It is the overwhelmingly most probable macrostate. The Pareto tail of the upper class arises from a different mechanism: multiplicative returns on capital, where income changes are proportional to existing holdings rather than additive. The boundary between the two classes, at approximately the 97th percentile, is remarkably stable across time and across countries with very different political and institutional arrangements. This does not mean the class structure cannot be changed. It means that changing it requires altering the underlying economic relations (conservation constraints, the mode of exchange, the social relations of production). Simply adjusting tax rates or regulatory frameworks within a system whose fundamental dynamics remain capitalist is insufficient.
Money Creation as Redistribution
The conservation-law framework has a direct bearing on how contemporary money creation is understood. If value is conserved in exchange, and if money functions as a claim on the products of social labor, then the process by which money enters the economy cannot create new value. It can only redistribute existing claims.
The Bank of England’s 2014 Quarterly Bulletin paper by McLeay, Radia, and Thomas confirmed what heterodox economists had long argued: the majority of money in the modern economy is created by commercial banks making loans. When a bank approves a mortgage or a business loan, it does not lend out pre-existing deposits. It credits the borrower’s account with new deposits, simultaneously creating an asset (the loan) and a liability (the deposit) on its balance sheet. The money supply expands with every new loan and contracts as loans are repaid. This is now the mainstream understanding of money creation, endorsed by central banks including the Bundesbank (2017) and the Swiss National Bank (Jordan, 2018). What banks create is private money backed by the borrower’s obligation to repay, a transformation of an illiquid claim (the borrower’s future income) into a liquid one (a bank deposit), not a conjuring of value from nothing. For the conservation-law framework, the implication is that each loan creates a pair of offsetting ledger entries, a loan asset and a deposit liability, and is therefore not a net addition to real wealth.
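The double-entry mechanics can be sketched in a few lines. This is a minimal illustration of the balance-sheet operations the Bank of England paper describes, not a model of any actual banking system:

```python
class Bank:
    """Minimal double-entry sketch of deposit creation by lending."""

    def __init__(self):
        self.loans = {}     # assets: borrower -> outstanding loan
        self.deposits = {}  # liabilities: holder -> deposit balance

    def make_loan(self, borrower, amount):
        # A new loan creates an asset and a matching deposit liability;
        # no pre-existing deposit is "lent out".
        self.loans[borrower] = self.loans.get(borrower, 0) + amount
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount

    def repay(self, borrower, amount):
        # Repayment extinguishes deposit money: the money supply contracts.
        self.loans[borrower] -= amount
        self.deposits[borrower] -= amount

    def money_supply(self):
        return sum(self.deposits.values())


bank = Bank()
bank.make_loan("firm", 1_000_000)
assert bank.money_supply() == 1_000_000            # new money created
assert sum(bank.loans.values()) == bank.money_supply()  # offsetting entries
bank.repay("firm", 400_000)
assert bank.money_supply() == 600_000              # repayment destroys money
```

At every step the loan asset and the deposit liability move together, which is the ledger-level content of the claim that lending redistributes claims rather than adding real wealth.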
From the standpoint of econophysics, this process does not violate conservation at the level of real value, because what banks create are nominal claims, not real goods or labor. The total stock of commodities available for purchase at any moment is determined by the productive economy: by the labor, materials, and capital goods currently deployed. When new money enters circulation through bank lending, the borrower gains purchasing power, an immediate claim on real goods and services. But that purchasing power is not created ex nihilo. It is transferred from other holders of money whose existing holdings now represent a smaller share of total nominal claims on the same stock of real output. The conservation law operates at the level of real appropriation, the actual command over labor and its products, even as the nominal money supply fluctuates.
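The dilution argument reduces to simple arithmetic over shares of a fixed real output. A stylized sketch with hypothetical holders and amounts:

```python
# Stylized dilution arithmetic: fixed real output, expanding nominal money.
real_output = 1000.0  # units of goods available this period
money = {"workers": 600.0, "rentiers": 300.0, "borrower": 100.0}

def real_claims(money, real_output):
    """Each holder's command over real goods, proportional to money share."""
    total_money = sum(money.values())
    return {k: real_output * m / total_money for k, m in money.items()}

before = real_claims(money, real_output)  # workers command 600 units

# A new loan credits the borrower with 200 in newly created deposits.
money["borrower"] += 200.0
after = real_claims(money, real_output)

# Real output is unchanged: the borrower's extra claim comes entirely
# out of the other holders' purchasing power.
print(before["workers"], after["workers"])  # 600.0 → 500.0
```

Total real claims sum to the same 1000 units before and after the loan; only their distribution changes, which is the conservation-at-the-level-of-real-appropriation point.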
Cockshott and Zachariah formalize this in their 2014 paper “Conservation Laws, Financial Entropy and the Eurozone Crisis” (Economics: The Open-Access, Open-Assessment E-Journal, Vol. 8, 2014-5) by distinguishing real appropriation from symbolic appropriation. Real appropriation is the acquisition of actual goods, services, and labor-time. Symbolic appropriation is the acquisition of money tokens and financial claims. Credit creation is a mechanism of symbolic appropriation that enables real appropriation by redistributing command over resources. A firm that obtains a bank loan and uses it to purchase machinery or hire workers has appropriated real resources. The seller of the machinery and the workers receive money tokens. But the total quantity of real goods and labor available in the economy has not changed as a result of the loan being issued. What has changed is who commands those resources. Cockshott and Zachariah further argue that a country running a persistent trade deficit is engaging in real appropriation of the surplus product of others while providing symbolic appropriation (money tokens, credit) in return, and that the accumulated debt this generates is the mirror image of the accumulated real wealth transferred. Domestically, the banking system’s power to create credit is a power to redistribute real command over resources. It does not expand the total stock of resources available.
The eighteenth-century Irish-French banker and economist Richard Cantillon first identified this redistributive dynamic. When new money enters an economy, it does not affect all agents simultaneously. The first recipients, those closest to the point of monetary injection, can spend before prices adjust to reflect the increased money supply. Later recipients face higher prices without having benefited from the initial spending. This pattern, now called the Cantillon effect, applies to every form of money creation: gold discoveries, bank lending, and central bank asset purchases alike. The effect is a redistribution of purchasing power from later recipients to earlier ones, mediated by the uneven diffusion of new money through the price system. The term “Cantillon effect” was popularized by Mark Blaug in his history of economic thought, though the phenomenon Cantillon described had been recognized under various names since the eighteenth century. The core observation is that money is non-neutral in its distributional consequences regardless of the specific monetary regime in operation.
In the contemporary economy this dynamic operates through two principal channels. The first is commercial bank lending. As confirmed by the Bank of England and the credit creation literature (Werner, 2014, International Review of Financial Analysis; Ryan-Collins, Greenham, Werner, and Jackson, 2011, Where Does Money Come From?), banks direct the majority of their lending not toward productive enterprise but toward the purchase of existing assets, particularly real estate and financial securities. Shaxson (2018, The Finance Curse) found that only approximately 25 percent of UK bank lending finances productive economic activity; the remaining 75 percent finances purchases of financial assets and property. This lending pattern generates what Bossone (2021, Post Keynesian Economics Society Working Paper) identifies as commercial bank seigniorage. Because banks create their own funding through deposit creation at near-zero marginal cost, they extract a rent analogous to the seigniorage traditionally associated with sovereign money issuance. Macfarlane, Ryan-Collins, Bjerg, Nielsen, and McCann (2017, New Economics Foundation) estimated that UK commercial banks earned approximately £23 billion per year in seigniorage from credit creation during the period 1998–2016, compared to approximately £1.2 billion per year in state seigniorage from banknote issuance. When credit flows predominantly into asset markets, the effect is to inflate the prices of assets already held disproportionately by the wealthy. The mechanism is simple. New money bids up the price of housing and equities without a corresponding increase in the real output of the economy. The result is a transfer of real purchasing power from non-asset-holders (who face higher prices for housing, goods, and services) to asset-holders (whose nominal wealth increases).
The second channel is central bank intervention, particularly the large-scale asset purchase programs known as quantitative easing (QE). Following the 2008 financial crisis, the Federal Reserve, the European Central Bank, the Bank of England, and the Bank of Japan collectively created trillions in new reserves to purchase government bonds, mortgage-backed securities, and in some cases corporate bonds and equities from private financial institutions. The stated aim was to lower long-term interest rates, restore bank lending, and stimulate a wealth effect through rising asset prices. The redistributive consequences were extensively documented: asset price inflation benefited primarily those who already held financial assets (Montecino and Epstein, 2015), while non-asset-holders experienced the consequences of rising housing costs and stagnant real wages. The Bank for International Settlements acknowledged in its 2021 Annual Report that every change in monetary policy stance inevitably redistributes interest income between debtors and creditors and reallocates wealth depending on asset holdings. Research reviewed by Positive Money EU (2024) synthesizing approximately forty empirical studies found that expansionary monetary policy has, in general, increased wealth inequality, because the wealth increase due to stock price appreciation far outweighs valuation gains in real estate, even in jurisdictions where homeownership is broadly distributed.
The econophysics perspective situates these observations within the conservation-law framework. The Drăgulescu-Yakovenko result, that random conservative exchange among many agents produces an exponential Boltzmann-Gibbs distribution, depends on a constraint. Money is conserved in transactions. When credit creation expands the nominal money supply without a corresponding expansion in real output, it does not create a new conservation constraint. It alters the distribution of claims on the same conserved quantity of real value. The exponential (thermal) class, whose income derives from additive wage transactions, absorbs the inflationary cost of monetary expansion as a dilution of purchasing power. The Pareto (superthermal) class, whose income derives from multiplicative capital returns, captures the gains from asset price inflation. The two-class structure identified by Yakovenko, Ludwig, and Tao et al. is therefore not merely reproduced but actively reinforced by the mechanisms of contemporary money creation. Credit expansion and quantitative easing do not alter the fundamental conservation constraint; they redistribute within it, systematically transferring real command over resources from the wage-earning majority to the asset-holding minority.
The class division between the exponential and Pareto distributions is therefore not a policy failure correctable by monetary reform alone. It is a structural consequence of the rules governing exchange and accumulation. Changing the monetary regime, whether by moving to sovereign money creation, full-reserve banking, or some other arrangement, would alter the channels through which redistribution occurs, but would not by itself abolish the class structure so long as the underlying relations of production and the multiplicative logic of capital accumulation remain intact.
The Falling Rate of Profit
The tension between conservation in exchange and compound accumulation plays out over historical time. Farjoun and Machover proposed the key reframing in Laws of Chaos (Verso, 1983): the rate of profit is not a single number tending toward equilibrium but a probability distribution whose parameters shift over time. Wright's agent-based model in Classical Econophysics bears this out, producing a gamma-like distribution of profit rates across firms, with time-varying parameters indicating a long-term downward trajectory. The dispersion of profit rates across firms and sectors is not noise to be averaged away but a structural feature of capitalist economies, the economic analogue of the distribution of molecular velocities in a gas.
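The Farjoun-Machover reframing can be made concrete with a toy calculation. The following is not Wright's model; it simply draws firm-level profit rates from a gamma distribution whose mean is assumed to drift downward (the shape parameter and the 1% annual drift are invented for the sketch), to show what "a distribution with shifting parameters" means in practice: cross-firm dispersion persists at every date while the location of the whole distribution declines.

```python
import random

def profit_rate_sample(year_index, n_firms=50_000, seed=1):
    """Illustrative only: firm-level profit rates drawn from a gamma
    distribution whose mean is assumed to decline 1% per year. The
    shape parameter k and the drift rate are invented for the sketch."""
    rng = random.Random(seed + year_index)
    k = 2.0                                # shape: fixes relative dispersion
    mean = 0.40 * (0.99 ** year_index)     # assumed downward drift of the mean
    theta = mean / k                       # gamma mean = k * theta
    return [rng.gammavariate(k, theta) for _ in range(n_firms)]

for t in (0, 50, 100):
    rates = profit_rate_sample(t)
    avg = sum(rates) / len(rates)
    print(f"t={t:3d}  mean profit rate ≈ {avg:.3f}")
```

Under these invented parameters the mean falls from about 0.40 at t=0 to about 0.15 at t=100, while individual firms still span a wide range of rates at every date, which is the sense in which the "falling rate of profit" is a statement about a distribution rather than a scalar.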
The most comprehensive dataset is Esteban Maito’s (2014, Universidad de Buenos Aires), covering fourteen countries back to 1870. Maito found a clear secular downward trend in core capitalist economies, from above 40% in the late nineteenth century to roughly 14.6% at the cyclical peak preceding the 2008 financial crisis. Even at the top of the business cycle, after the neoliberal partial recovery, the rate was barely a third of its nineteenth-century level. The decline was very regular from the 1870s through the 1970s. The rate of profit in core countries was consistently lower than in peripheral ones, though the latter experienced steeper postwar declines. Basu, Huato, Jauregui, and Wasner (2022), using the Extended Penn World Tables for a global calculation, found the world rate of profit declined approximately 25% between 1960 and 2019, driven primarily by a falling output-capital ratio (~0.8% per year decline) that outpaced a rising profit share (~0.25% per year increase). Basu and Manolakos (2013) found weak evidence of a long-run downward trend in the US rate of profit, declining at approximately 0.3% per year over 1948–2007. Carchedi and Roberts compiled a global analysis in World in Crisis (Haymarket Books, 2018), examining profit rate data across multiple economies and finding the pattern broadly consistent with Marx’s law, including the cyclical relationship between profit rate troughs and the onset of major recessions. Cockshott and Zachariah (2014) showed that a fundamental structural tension exists between money conservation in transactions and exponential capital accumulation, the same tension that generates periodic crises and has historically driven the evolution of monetary forms, from gold to banknotes to electronic credit.
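The Basu, Huato, Jauregui, and Wasner decomposition can be checked on the back of an envelope. Since the profit rate is the profit share times the output-capital ratio, its growth rate is approximately the sum of the two components' growth rates; compounding the rounded figures quoted above over 1960–2019 gives a total decline in the same ballpark as the study's roughly 25%.

```python
import math

# Decomposition: profit rate r = (profit share) x (output/capital), so
# growth(r) ≈ growth(profit share) + growth(output-capital ratio).
g_share = +0.0025            # profit share rising ~0.25% per year (from the text)
g_yk    = -0.0080            # output-capital ratio falling ~0.8% per year (from the text)
years   = 2019 - 1960        # the sample period

g_r = g_share + g_yk                     # ≈ -0.55% per year
decline = 1 - math.exp(g_r * years)      # compounded over the period
print(f"implied annual change in r : {g_r:+.2%}")
print(f"implied total decline      : {decline:.1%} over {years} years")
```

The implied decline comes out slightly above 25%; the gap reflects the rounding of the two component growth rates, not a disagreement with the study.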
There is a methodological dispute concerning historical cost versus replacement cost measurement of capital stock. Andrew Kliman, using historical-cost measures consistent with the Temporal Single System Interpretation, argues the US profit rate never sustainably recovered after the post-1960s decline. Duménil and Lévy, using replacement-cost measures, find a genuine but partial recovery; by 2000, the profit rate stood at roughly half its 1948 value. Robert Brenner accepts falling profits but rejects Marx’s explanation (rising organic composition), proposing instead that intensified international competition caused persistent overcapacity. The econophysics approach is agnostic on these interpretive disputes. It treats the profit rate as a distributional phenomenon whose long-run trajectory is consistent with the tendency Marx described, without requiring his specific causal mechanism.
Michael Heinrich’s 1996 Science & Society article, comparing Marx’s 1864–65 manuscript (published in MEGA in 1993) with Engels’s 1894 edition of Capital Volume III, documented extensive editorial modifications: Engels transformed Marx’s seven chapters with thirty-four headings into seven parts with fifty-two chapters and ninety-two headings, made unmarked insertions, and deleted whole paragraphs. The famous passage about “the poverty of the masses as the ultimate reason for all real crisis” was in brackets in Marx’s manuscript; Engels integrated it unmarked into the main text. Heinrich’s 2013 Monthly Review article pushed further, claiming the tendency of the rate of profit to fall (TRPF) is logically indeterminate since rising organic composition and rising rate of surplus value both result from the same process (increasing productivity). Kliman, Freeman, Potts, Gusev, and Cooney (2013) responded that Heinrich confuses the law with a mechanical prediction. Carchedi and Roberts (2013) argued Heinrich misidentifies the rate of surplus value as part of the law rather than as a counter-tendency. A 2024 Capital & Class article by Christos Balomenos, examining MEGA archival materials directly, concluded that Engels’s editing, despite significant interventions, remains faithful to Marx’s analysis.
The econophysics reframing partially dissolves these debates. The classical argument asked whether a single variable must fall given rising organic composition. The statistical-mechanical argument asks whether the parameters of a distribution shift in a way consistent with the empirical record. The data suggest they do. The tension between money conservation in transactions and the compound accumulation of capital is a structural feature of the system rather than a deduction from a single algebraic identity.
What This Establishes
These findings establish four things for the purpose of this proposal. First, labor values reliably predict prices; a democratic body that directs production according to socially necessary labor time is not operating blindly. Second, the class structure of capitalism is a statistical regularity arising from the same conservation laws that govern physical systems, and it is amenable to scientific analysis and intervention. Third, contemporary money creation, whether through commercial bank lending or central bank asset purchases, does not create new value but redistributes existing claims on real output, systematically reinforcing the two-class structure by channeling newly created purchasing power toward asset-holders at the expense of wage-earners. Fourth, the rate of profit tends to fall over the long run, generating crises that destroy productive capacity while human needs go unmet. The question is what to do about it.