The Interdependence of Autonomous Human-Machine Teams: The Entropy of Teams, But Not Individuals, Advances Science
Key concepts: We review interdependence theory as measured by entropic forces, findings in its support, and several examples from the field to advance a science of autonomous human-machine teams (A-HMTs) with artificial intelligence (AI). While theory is needed for the advent of autonomous HMTs, social theory is predicated on methodological individualism, a statistical and qualitative science that generalizes neither to human teams nor to HMTs. Maximum interdependence in human teams is associated with the performance of the best teams when compared to independent individuals; our research confirmed that the top global oil firms maximize interdependence by minimizing redundant workers, a result replicated for the top militaries in the world, with the additional finding that impaired interdependence is associated with proportionately less freedom, increased corruption, and poorer team performance. We advanced theory by confirming that maximum interdependence in teams requires intelligence to overcome obstacles to maximum entropy production (MEP); e.g., navigating obstacles while abiding by military rules of engagement requires intelligence. Approach: In a case study, we model as harmonic the long-term oscillations driven by two federal agencies in conflict over closing two high-level radioactive waste tanks, a conflict that ended when citizens recommended closing the tanks. Results: While contradicting rational consensus theory, our quasi-Nash equilibrium model generates the information that neutrals need to decide; it suggests that HMTs should adopt the way harmonic oscillations in free societies regulate human autonomy to improve decisions and social welfare.
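To make the harmonic framing concrete, the following is a minimal toy sketch, not the paper's actual model: it assumes the signed disagreement between two agencies behaves like an undamped harmonic oscillator, so the debate swings repeatedly between sides rather than converging, and the count of sign changes is the kind of oscillation information a neutral observer could use. The function name `simulate_conflict` and all parameter values are illustrative assumptions.

```python
import math

def simulate_conflict(omega=1.0, x0=1.0, v0=0.0, dt=0.01, steps=2000):
    """Toy model (not the authors' model): x(t) is the signed disagreement
    between two agencies, following the exact undamped-oscillator solution
    x(t) = x0*cos(omega*t) + (v0/omega)*sin(omega*t)."""
    return [x0 * math.cos(omega * i * dt) + (v0 / omega) * math.sin(omega * i * dt)
            for i in range(steps)]

traj = simulate_conflict()
# Each sign change marks the debate swinging to the other side; an
# undamped oscillation keeps swinging, mirroring a persistent conflict.
swings = sum(1 for a, b in zip(traj, traj[1:]) if a * b < 0)
```

With the default parameters the trajectory crosses zero six times over the simulated window, illustrating a conflict that oscillates indefinitely until some outside input (here, the citizens' recommendation) ends it.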