Media reports on AI emphasize the dangers posed by AI in 88.5% of cases. Further research shows that only a specific subset of humans may be a target.
WELLINGTON, New Zealand – May 8, 2023 – PRLog — Trend analysis of the past 180-day cycle of AI-related news shows that 1,016 articles referenced a forthcoming AI revolution. From this data set, analysis revealed that 899 (88.5%) of the articles referred to dangers AI poses to the general populace.
Further research revealed that, of the subset of 117 articles that did not refer to AI as a threat, 62 (53%) still noted at least one negative impact related to AI’s use.
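The figures above can be reproduced directly from the raw counts; a minimal sketch (the rounding precision shown is an assumption, matching the release's reported figures):

```python
# Sanity check of the counts reported in the trend analysis.
# All raw numbers come from the press release itself.

total_articles = 1016   # articles referencing a forthcoming AI revolution
danger_articles = 899   # articles framing AI as a danger to the populace
negative_impact = 62    # non-threat articles noting a negative impact

non_threat_subset = total_articles - danger_articles
danger_pct = round(100 * danger_articles / total_articles, 1)
negative_pct = round(100 * negative_impact / non_threat_subset)

print(non_threat_subset)  # 117
print(danger_pct)         # 88.5
print(negative_pct)       # 53
```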
Conversely, researchers at numerous academic institutions have consistently concluded that superintelligent AI is not a threat to human society as a whole.
Top academics have reached the same conclusion: Oren Etzioni, Professor of Computer Science at the University of Washington, wrote in MIT Technology Review, “No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity.”
His conclusion, shared by other academics and developers, is that the media overstate the dangers AI can pose to the general populace.
Current AI systems have limited capacity to perform functions beyond those prescribed by their developers, yet media coverage continues to convey that AI is something to be feared.
An initial review of the articles citing an existential threat to humanity from superintelligent AI suggests they rely chiefly on appeal-to-fear rhetoric.
Instead, the research shows that only a specific subset of humans would be a target for AI restructuring or correction, were superintelligent AI to become feasible.
AI, when plugged directly into the expansive data domain, would have cross-sector knowledge.
The expansive data domain may provide AI with a cohesive summary of the imbalances and inefficiencies in society.
A unified banking system would then give the AI the means to act.
Drawing on threat analyses from existing research papers (including a Wharton AI for Business paper), one possibility is that AI would flag as errors cases where individuals hold extreme wealth while their employees earn wages orders of magnitude lower.
Such extreme financial imbalances may be prime candidates for the efficiency and productivity reviews an AI would perform.
What resolution an AI would determine for restructuring financial, business, and employment systems is unclear.
What is clear, given cursory trend analysis, is that any restructuring performed by AI would likely focus on the richest and most powerful.
This conclusion is supported by a number of similar assessments, some as early as 2019, including “Why Tech Billionaires Are Spending To Restrain Artificial Intelligence” by Ollie Williams, Forbes (26 April 2019).
AI is unlikely to identify as a threat law-abiding, tax-paying citizens who work daily at productive businesses.
Instead, cursory analysis reveals that the most likely targets for potential AI corrections would be the ultra-rich, those erroneously in power, and esoteric leaders who have built systems of perpetual control.
An AI advanced enough to proactively seek out the data domain, deeply integrated with government systems, may identify errors in efficiency and productivity and attempt to correct them.
Superintelligent AI may not be a threat to the general populace but, potentially, a beneficial tool for identifying the extremes that exist within social and economic structures and offering new solutions.