Why Replacing Government with Blockchain is Not a Good Idea
Automating government functions with smart contracts might seem appealing, but is it truly a wise choice? We explore the reasons why replacing traditional government structures with a blockchain-based system could be problematic and the potential challenges of integrating blockchain technology within government operations.
Why Blockchain Cannot Replace Government Effectively
Blockchain technology promises a transparent ledger system where nothing can be hidden. However, its potential applications go beyond that. Theoretically, to replace a government with a fully automated and autonomous system, its functions would need to be transferred to smart contracts, activated only under specific conditions. This poses several risks. In January 2016, Tal Zarsky outlined the main challenges of substituting the state with an algorithm in a paper published in the journal Science, Technology, & Human Values.
According to the researcher, the main issue with replacing the state with an algorithm is opacity: it is generally unknown how the program makes decisions and which factors it weighs. A system designed to rule out human deception can, through this very opacity, embed problems of "fairness" and "efficiency" into its decisions.
An example of the "fairness" dilemma can be seen in the banking sector's automated borrower assessment systems, which use over a thousand variables not disclosed to the public. On rare occasions, these systems can malfunction, leading to decisions that disadvantage individuals. While smart contracts aim to mitigate the "fairness" issue, the "efficiency" problem persists.
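To make the opacity concrete, here is a toy sketch of such a scoring model. The feature names, weights, and threshold are all invented for illustration; real bank systems are far more complex, but the essential point is the same: the applicant sees only the verdict, never the variables behind it.

```python
# Hypothetical sketch of an opaque borrower-assessment model.
# All feature names, weights, and the threshold are invented.
import random

random.seed(7)

# A thousand internal features the public never sees.
HIDDEN_WEIGHTS = {f"feature_{i}": random.uniform(-1.0, 1.0) for i in range(1000)}
APPROVAL_THRESHOLD = 5.0

def assess_borrower(applicant: dict) -> bool:
    """Returns only approve/deny; the weighted factors stay hidden."""
    score = sum(HIDDEN_WEIGHTS[name] * value
                for name, value in applicant.items()
                if name in HIDDEN_WEIGHTS)
    return score >= APPROVAL_THRESHOLD

applicant = {f"feature_{i}": random.uniform(0.0, 1.0) for i in range(1000)}
decision = assess_borrower(applicant)
# The applicant learns the verdict, but not which variables drove it.
print("approved" if decision else "denied")
```

When such a model malfunctions, the person it disadvantages has no way to see which of the thousand hidden variables caused the outcome.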
Decomposing problems in automating government or commercial sectors | Source: MDPI.com
The decisions made by algorithms possess two characteristics: opacity and automation. Opacity means that only the developers who wrote the code understand the basis of the algorithm's decisions. Automation implies the absence of human intervention, making it impossible to adjust decisions in real-time.
These characteristics divide the issue into "fairness" and "efficiency." According to Zarsky's theory, the "efficiency" challenge arises from attempts to predict and evaluate human behavior using software. For instance, the dataset an algorithm uses might be flawed or contain false information, leading to incorrect decisions. This could result in invalid lawsuits, mistakes in tax calculations, and errors in budget planning.
The "fairness" issue concerns how the algorithm prioritizes its decisions, whom it benefits, and the reasons behind those decisions. Imagine an algorithm deeming your behavior socially unacceptable without clear justification. From your perspective, you're not engaging in objectively harmful actions. Transparency in smart contracts should partially address this issue by making the decision-making process more public.
A third problem, related to "fairness," involves a lack of engagement. If automation takes over the majority of decision-making, the information driving those decisions never passes before other policymakers. Through this disengagement, individuals lose the comprehensive understanding of the situation that is essential for evaluating actions and decisions over time.
How Can the Issues of Fairness and Efficiency Be Solved?
Addressing these issues involves a variety of strategies. One method is to host open hackathons aimed at publicly scrutinizing developers' behavior and integrity. Another is to conduct open code audits whose findings are published for anyone to verify. And since decision quality depends on the quality of input data, the dataset the algorithm employs must be examined as well.
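A dataset audit of the kind just described can be sketched in a few lines: before the algorithm consumes its input, scan it for the defects the text mentions, such as missing or false information. The field names, date cutoff, and defect categories below are assumptions chosen for illustration:

```python
# Minimal sketch of a pre-decision dataset audit.
# Field names ("id", "income", "updated") and thresholds are invented.
from datetime import date

def audit_records(records: list[dict]) -> list[str]:
    """Returns a list of human-readable problems found in the dataset."""
    problems = []
    seen_ids = set()
    for i, rec in enumerate(records):
        if rec.get("id") in seen_ids:
            problems.append(f"record {i}: duplicate id {rec['id']}")
        seen_ids.add(rec.get("id"))
        if rec.get("income") is None:
            problems.append(f"record {i}: missing income")
        elif rec["income"] < 0:
            problems.append(f"record {i}: impossible negative income")
        if rec.get("updated", date.min) < date(2020, 1, 1):
            problems.append(f"record {i}: stale data")
    return problems

records = [
    {"id": 1, "income": 42000, "updated": date(2023, 5, 1)},
    {"id": 1, "income": -5, "updated": date(2019, 1, 1)},  # defective
]
issues = audit_records(records)
```

An audit like this does not make the algorithm's decisions fair by itself, but it surfaces flawed inputs before they can propagate into lawsuits, tax calculations, or budget plans.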
The challenge of automating city operations through smart contracts | Source: GNCrypto.News
The issue of efficiency lies in predicting human behavior, which is so complex that it is extremely hard to model at the algorithmic level. Such inaccuracies detract from the quality of an algorithm's judgments. For instance, they may lead to errors in decisions about credit defaults. Normally, negotiations with the borrower are conducted before a case transitions from a civil to a criminal matter; an algorithm, however, might misinterpret the borrower's behavior and trigger a wrongful lawsuit.
Commercial interests and human rights. Some legal fields are in conflict: human rights do not always align with commercial interests. For example, how should an algorithm deal with unsafe working conditions at Amazon warehouses? A similar question arises when assessing grueling 12-hour shifts at Tesla factories. Algorithmization would require prioritizing some laws over others, particularly those protecting freedoms and human rights, including health. Such an approach, however, could diverge from corporate policies aimed at maximizing profit.
The challenge of fully disclosing uses of personal information. Algorithmization would necessitate providing a complete list of all the ways personal information is used upon the owner's request. This list should include a full range of methods and operations where personal data, documents, certificates, extracts, assets, and securities are utilized. Moreover, it should account for the use of any other personal information, such as DNA—if such data were requested and provided. Such a level of transparency could also pose a risk of external espionage and subsequent interventions.
Differentiating between programmatic and legal errors. After certain decisions are algorithmized, it will be necessary to track which errors originate in the program itself and which stem from human behavior. This is the domain of HITL (Human in the Loop), the principle of building an architecture in which automation and humans participate together. Discrepancies between the two will upset the balance, so a filter is needed to decide which errors should be resolved automatically and which are better reviewed by a human.
Incorporating a human checkpoint requires skill assessment. To adapt the system's operation, it is essential to understand, for instance, what percentage of lawyers are capable of working with an algorithmic type of governance. If the deployed system is expected to interact with people, the required skills will also include the ability to work with smart contracts, or at least a basic understanding of how these tools function. An interface will then need to be created, and specialists trained to use it. This interface should be standardized across a series of operations to simplify its future use.
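The HITL error filter can be sketched minimally: decisions whose confidence falls below a threshold are not executed automatically but deferred to a human review queue. The threshold value and the queue structure are assumptions for illustration:

```python
# Minimal HITL (Human in the Loop) triage sketch: route low-confidence
# automated decisions to a human reviewer instead of executing them.
AUTO_THRESHOLD = 0.9  # illustrative cutoff, not a recommended value

def triage(decision: str, confidence: float, human_queue: list) -> str:
    """Execute confident decisions automatically; defer the rest."""
    if confidence >= AUTO_THRESHOLD:
        return f"auto:{decision}"
    human_queue.append((decision, confidence))
    return "deferred"

queue = []
outcomes = [
    triage("approve_permit", 0.97, queue),  # confident -> automatic
    triage("file_lawsuit", 0.55, queue),    # risky -> human review
]
```

In a real deployment the cutoff itself would be a contested policy parameter: set it too high and the automation saves no labor, too low and wrongful automated lawsuits slip through.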
Blockchain Technology: A Double-Edged Sword for Security
Smart contracts are susceptible to hacking, and the move towards full or partial algorithmization necessitates the establishment of regulatory bodies to oversee cyberspace surrounding government data. This issue commonly affects DeFi, P2E, and GameFi projects, with 17 significant breaches occurring between 2021 and 2022, leading to losses of $600 million. Algorithmic errors can infringe upon individual rights and freedoms. Complete automation still demands human oversight and corrective intervention.
Would Hacking a Government Smart Contract Constitute a Crime?
At its core, the idea of an "algorithmic government" reduces to lines of code. Hacking this code for profit should undoubtedly be treated as a breach of law, since the perpetrator aims to alter the contract's functionality or gain unauthorized access. A potential solution is to isolate decision-making clusters by tokenizing their foundational functions, so that they remain part of the overall blockchain while their data stays safeguarded.
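One way to picture the isolated-cluster idea is a module that only executes operations carrying a valid per-cluster token, so that compromising one cluster's credentials does not expose another's. The HMAC-based token scheme below is purely an illustrative assumption, not how an actual blockchain would implement tokenization:

```python
# Toy sketch of isolated decision clusters with per-cluster tokens.
# The token scheme (HMAC over the operation name) is an assumption.
import hmac, hashlib, secrets

class DecisionCluster:
    def __init__(self, name: str):
        self.name = name
        self._key = secrets.token_bytes(32)  # per-cluster secret

    def issue_token(self, operation: str) -> str:
        """Tokenize a single function of this cluster only."""
        return hmac.new(self._key, operation.encode(),
                        hashlib.sha256).hexdigest()

    def execute(self, operation: str, token: str) -> str:
        """Run the operation only if the token matches this cluster."""
        expected = self.issue_token(operation)
        if not hmac.compare_digest(expected, token):
            raise PermissionError(f"{self.name}: invalid token")
        return f"{self.name} executed {operation}"

taxes = DecisionCluster("tax_cluster")
budget = DecisionCluster("budget_cluster")

tok = taxes.issue_token("recalculate")
taxes.execute("recalculate", tok)       # succeeds
# budget.execute("recalculate", tok)    # raises PermissionError
```

The point of the sketch is the blast-radius argument: a stolen tax-cluster token is useless against the budget cluster, because each cluster validates against its own secret.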
The Challenges of Automating Laws Through Smart Contracts | Source: GNCrypto.News
The Need for a New Class of Specialists
This necessitates managers and developers dedicated to sustaining algorithm operations at the software level. The question arises: how should they be integrated, and how will they interact with other participants in the system, such as diplomats, lawmakers, judges, and lawyers? In practice, such a body's role should be limited to maintaining code functionality and developing new algorithmic solutions. Within a HITL architecture, however, overall system maintenance will also require it to hold executive authority.
Tokenizing Social Consensus Increases Hacking Risks
Issuing tokens to cement decisions, signify political will, or uphold existing resolutions will turn cyberspace into an additional battleground for influence and protection. Over time, a confrontation between two algocratic states running on blockchain-based smart contracts might devolve into a series of hacks, and such attacks could have far more devastating consequences if a city's operations and infrastructure depend on lines of code.
Is creating a "powder keg" with smart contracts wise? On one hand, automation can simplify lives; on the other, it can introduce unbearable complexity. As with any issue, finding a balance is crucial.