Research Proposal: Smart Contracts and Governance of Decentralised Autonomous Organisations


Savva Kerdemelidis, Amnon H. Eden

Potential funding sources


  • The DAO is a venture fund built on the Ethereum blockchain, whose smart-contract programming language (Solidity) allows transactions to be executed programmatically as “smart contracts”
  • The legal terms of The DAO specify that, in case of dispute, the code provides the definitive version of the agreement
    • The DAO is designed to be objective, precise, explicit, transparent, and not subject to human whims
    • Purists believe that the DAO is defined by its source code
    • It follows that smart contracts must not be rolled back (the blockchain is immutable), even in the case of an exploit
  • However, the source code does not capture the parties’ “intent”. Therefore there is no distinction between “exploits” and “features” (in Vitalik Buterin’s words, between “intent” and “implementation”)
    • A recent hack of The DAO siphoned off approximately $50M worth of ether. Since the source code permitted the withdrawals, the purist view admits no wrongdoing and encodes no remedy.
    • More mundanely, there is a need to roll back transactions that violate intent, such as a chargeback for faulty goods
  • On the other hand, pragmatics require that smart contracts execute the parties’ intent if they are to be useful, and that exploits and other forms of fraud be rolled back
    • The available “fixes” are a soft or hard fork, which requires the consent of 51% of the miners, contradicts the idea of the immutability of the blockchain, and is very disruptive to the community
  • Therefore: the need to arbitrate disputes
  • At this stage it is unclear how that would be done. AI is an option, but that would be an exceptionally novel application of artificial intelligence, which requires serious examination. Hence this research.
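The gap between “intent” and “implementation” can be made concrete with a minimal sketch of the reentrancy pattern behind the DAO drain. Python stands in for Solidity here, and all names are hypothetical: the vulnerable contract pays out *before* zeroing the caller’s balance, so a re-entering callback can withdraw the same balance repeatedly.

```python
# Minimal sketch (hypothetical names; Python stands in for Solidity) of the
# reentrancy flaw behind the DAO drain: the fund sends money *before*
# updating the caller's balance, so a malicious callback withdraws again.

class VulnerableFund:
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, caller):
        amount = self.balances.get(caller.address, 0)
        if amount > 0:
            caller.receive(self, amount)        # external call first...
            self.balances[caller.address] = 0   # ...state update second: the bug

class Attacker:
    def __init__(self, address, max_reentries):
        self.address = address
        self.stolen = 0
        self.reentries = max_reentries

    def receive(self, fund, amount):
        self.stolen += amount
        if self.reentries > 0:                  # re-enter withdraw() mid-call
            self.reentries -= 1
            fund.withdraw(self)

fund = VulnerableFund({"mallory": 10})
mallory = Attacker("mallory", max_reentries=4)
fund.withdraw(mallory)
print(mallory.stolen)  # 50: five payouts of 10 from a balance of 10
```

Every payout here is allowed by the code as written, which is exactly the purist difficulty: by the “code is the contract” reading, nothing was stolen.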


  1. Can we formulate the distinction between exploit and feature in legal terms? As a “smart contract”?
    • That is, can it be defined in the Solidity programming language?
    • How precisely are the semantics of Solidity defined? Are there more formal ways to express them?
  2. What sort of governance/oversight by people is acceptable? By the community?
  3. Could AI help solve these issues?
    • Could AI increase the resilience of the system against hacks/exploits?
    • Could AI help determine the intent of a contract?
    • Would it be better if AI “governs” (within a specific mandate)? Or would this be at risk of an exploit?
    • (Assuming a machine learning-type of AI:) Should the AI be self-correcting? Self-improving?
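The governance question above can be illustrated with a toy sketch. This is a hypothetical design, not an existing protocol or proposal: a disputed transaction is rolled back only if a supermajority of a designated panel of arbiters rules that it violated the parties’ intent.

```python
# Toy sketch of intent arbitration (hypothetical design, not an existing
# protocol): a disputed transaction is rolled back only if a supermajority
# of arbiters votes that it violated the parties' intent.

class Ledger:
    def __init__(self):
        self.transactions = []   # committed (sender, receiver, amount) tuples

    def commit(self, tx):
        self.transactions.append(tx)

class ArbitrationPanel:
    def __init__(self, arbiters, threshold=2 / 3):
        self.arbiters = arbiters
        self.threshold = threshold

    def rule(self, ledger, tx, votes):
        """Roll back tx iff at least `threshold` of arbiters deem it intent-violating."""
        in_favour = sum(1 for a in self.arbiters if votes.get(a, False))
        if in_favour / len(self.arbiters) >= self.threshold:
            ledger.transactions.remove(tx)
            return "rolled back"
        return "upheld"

ledger = Ledger()
drain = ("dao", "attacker", 50_000_000)
ledger.commit(drain)
panel = ArbitrationPanel(["a1", "a2", "a3"])
verdict = panel.rule(ledger, drain, {"a1": True, "a2": True, "a3": False})
print(verdict)  # "rolled back": 2 of 3 arbiters voted against the transaction
```

Whether the arbiters are humans, the community at large, or an AI within a specific mandate is precisely the open question; the sketch only fixes the mechanism, not the judge.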


  • Buterin, V., 2016. Thinking About Smart Contract Security. Ethereum Blog. Available at: https://blog.ethereum.org/2016/06/19/thinking-smart-contract-security/ [Accessed June 21, 2016].
  • Daian, P., 2016. Analysis of the DAO exploit. Hacking Distributed. Available at: http://hackingdistributed.com/2016/06/18/analysis-of-the-dao-exploit/ [Accessed June 22, 2016].
  • DAOhub.org, 2015a. Manifesto: The DAO’s Operating Guidelines. Available at: https://daohub.org/manifesto.html [Accessed June 22, 2016].
  • DAOhub.org, 2015b. The DAO – Explanation of Terms and Disclaimer. Available at: https://daohub.org/explainer.html [Accessed June 22, 2016].
  • Ethereum.org, 2015. Decentralized Autonomous Organization: Create a Democracy contract in Ethereum. Available at: https://www.ethereum.org/dao [Accessed June 22, 2016].
