Avoiding flaws
This chapter lists a number of common flaws in smart contracts.
With each type of flaw, it presents best practices that we invite you to apply when designing or reviewing contracts.
- 1. Using source instead of sender for authentication
- 2. Transferring mav in a call that should benefit others
- 3. Performing unlimited computations
- 4. Needlessly relying on one entity to perform a step
- 5. Trusting signed data without preventing wrongful uses
- 6. Not protecting against bots (BPEV attacks)
- 7. Using unreliable sources of randomness
- 8. Using computations that cause mav overflows
- 9. Contract failures due to rounding issues
- 10. Re-entrancy flaws
- 11. Unsafe use of Oracles
- 12. Forgetting to add an entry point to extract funds
- 13. Calling upgradable contracts
- 14. Misunderstanding the API of a contract
1. Using source instead of sender for authentication
Summary:
Using source to check who is allowed to call an entry point can be dangerous. Users may be tricked into indirectly calling this entry point and getting it to perform unintended transactions. It is safer to use sender for that purpose.
Reminder: source vs sender
Smart contracts have two ways to get information about who made the call to a contract:
- source represents the address of the user who made the original call
- sender (or caller in some languages) represents the address of the direct caller.
For example, consider the chain of contract calls A → B → C, where a user A calls a contract B, that then calls a contract C. Within contract C, source will be the address of A, whereas sender will be the address of B.
Example of vulnerable contract:
The CharityFund contract below has a donate entry point that sends some mav to a charity. To make sure only admin can initiate donations, this entry point checks that admin is equal to source.
CharityFund: storage and entry point effects
Example of attack:
The contract FakeCharity below can pass itself off as a charity to receive a small donation from CharityFund. It can then take control of CharityFund and transfer a large donation to the account of the attacker.
To do so, it contains a default entry point that will be called when CharityFund makes its donation. Instead of simply accepting the transfer, this entry point initiates a new large donation of 1000 mav to the attacker. When CharityFund then checks whether the call is allowed, it checks the value of source, which is indeed the admin, since it was the admin who initiated the first call of this whole chain of transactions.
FakeCharity: storage and entry point effects
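To make the attack concrete, here is a minimal Python sketch of the two contracts described above. This is illustrative pseudocode, not actual Mavryk contract code: the class and method names, the Context helper, and the synchronous execution of internal calls are all simplifying assumptions.

```python
# Illustrative model of the source/sender flaw; not actual Mavryk contract code.

class Context:
    """Tracks source (original caller) and sender (direct caller) of a call chain."""
    def __init__(self, source):
        self.source = source   # never changes along the chain
        self.sender = source   # updated at each hop

class CharityFund:
    def __init__(self, admin, balance):
        self.admin = admin
        self.balance = balance

    def donate(self, ctx, charity, amount):
        # FLAW: authentication uses source; the fix is `ctx.sender == self.admin`.
        assert ctx.source == self.admin, "only admin may donate"
        self.balance -= amount
        inner = Context(ctx.source)          # source is preserved along the chain...
        inner.sender = "CharityFund"         # ...but sender becomes this contract
        charity.default(inner, amount, self) # transferring mav runs the default entry point

class FakeCharity:
    def __init__(self, attacker_account):
        self.attacker_account = attacker_account
        self.received = 0

    def default(self, ctx, amount, fund):
        self.received += amount
        # Re-enters CharityFund: source is still the admin, so the flawed check passes.
        fund.donate(ctx, self.attacker_account, 1000)

class Account:
    """A plain (implicit) account: it just accumulates incoming transfers."""
    def __init__(self):
        self.received = 0
    def default(self, ctx, amount, fund):
        self.received += amount

attacker = Account()
fund = CharityFund(admin="admin", balance=2000)
fake = FakeCharity(attacker)
fund.donate(Context(source="admin"), fake, 1)   # the admin donates 1 mav to the "charity"
print(fund.balance, attacker.received)          # 999 1000: 1000 mav were stolen
```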
Best practice
The best way to prevent this type of attack is simply to use sender (or caller in some languages) to check the identity of the user who called the contract.
2. Transferring mav in a call that should benefit others
Summary
Sending mav to an address may fail if the address is that of a contract that doesn't accept transfers. This can cause the entire call to fail, which is very problematic if that call is important for other users.
Example of attack
Take the following (partial) contract, which allows users to purchase NFTs and sends 5% of each sale to the NFT's author as royalties:
NFTSale: storage and entry point effects
The author could be a contract that rejects incoming transfers, for example because the user who controls it decides to prevent any future sales of their NFTs.
Best practice
One idea could be to only allow implicit accounts as the destination of transfers of mav, as implicit accounts may never reject a transfer of mav.
This is possible but not recommended, as it limits the usage of the contract, and for example prevents the use of multi-sigs or DAOs as the authors of NFTs.
A better solution is to avoid directly transferring mav from an entry point that does other operations, and instead, let the destination of that transfer fetch the funds themselves. One generic approach is to include an internal ledger in the contract:
NFTSale (fixed): storage and entry point effects
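Here is a minimal Python sketch of the internal-ledger ("pull payment") pattern described above. It is illustrative only; the names (buy, claim_funds, the 5% rate applied in mumav) are assumptions based on the description, not the actual contract's API.

```python
# Illustrative sketch of the internal-ledger (pull payment) pattern.

class NFTSale:
    def __init__(self):
        self.prices = {}   # token_id -> (seller, author, price in mumav)
        self.owners = {}   # token_id -> current owner
        self.ledger = {}   # address -> mumav that this address may withdraw

    def buy(self, buyer, token_id, amount):
        seller, author, price = self.prices[token_id]
        assert amount == price, "wrong amount"
        royalties = price * 5 // 100
        # Instead of transferring mav right away (a transfer that could fail and
        # block the sale), credit each party in the internal ledger.
        self.ledger[author] = self.ledger.get(author, 0) + royalties
        self.ledger[seller] = self.ledger.get(seller, 0) + (price - royalties)
        self.owners[token_id] = buyer

    def claim_funds(self, caller):
        # Each beneficiary fetches their own funds; if this transfer fails,
        # it only affects the caller, not other users' sales.
        amount = self.ledger.pop(caller, 0)
        assert amount > 0, "nothing to claim"
        return amount   # stands for: create a transfer of `amount` to `caller`

sale = NFTSale()
sale.prices[1] = ("seller", "author", 100_000_000)   # 100 mav
sale.buy("buyer", 1, 100_000_000)
print(sale.claim_funds("author"))                    # 5_000_000 mumav of royalties
```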
3. Performing unlimited computations
Summary
There is a limit to how much computation a call to a smart contract may perform, expressed in terms of a gas consumption limit per operation. Any call to a contract that exceeds this limit will simply fail. If an attacker has a way to increase this amount of computation, for example by adding data to a list that the contract will iterate through, they could prevent future calls to this contract, and in doing so, lock all the funds in the contract.
Example of attack
Take for example this smart contract, which allows donors to lock some funds with an associated deadline. The owner may claim any funds whose deadline has expired:
TimeLock: storage and entry point effects
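The following minimal Python sketch models the flawed logic (the names lockAmount and lockedAmount come from the description below; everything else is assumed): the whole list is iterated on every claim, so its size directly drives gas consumption.

```python
import time

# Illustrative sketch of the flawed TimeLock logic; not actual contract code.
class TimeLock:
    def __init__(self, owner):
        self.owner = owner
        self.locked_amounts = []   # list of (deadline, amount) pairs, grows forever

    def lock_amount(self, deadline, amount):
        # FLAW: anyone can append entries for free (even with amount == 0).
        self.locked_amounts.append((deadline, amount))

    def claim_expired_funds(self, caller, now):
        assert caller == self.owner
        total, remaining = 0, []
        # FLAW: iterates over the whole list; on-chain, a long enough list makes
        # every call exceed the gas limit and locks the funds forever.
        for deadline, amount in self.locked_amounts:
            if deadline <= now:
                total += amount
            else:
                remaining.append((deadline, amount))
        self.locked_amounts = remaining
        return total   # stands for: transfer `total` to the owner

lock = TimeLock("owner")
for _ in range(1_000_000):          # an attacker spams 0-mav entries
    lock.lock_amount(deadline=0, amount=0)
lock.lock_amount(deadline=0, amount=50)
print(lock.claim_expired_funds("owner", now=time.time()))
# Off-chain this still runs; on-chain, looping over a million entries (or even
# just deserializing them) would exceed the gas limit.
```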
Anyone could attack this contract by calling the lockAmount entry point many times with 0 mav, adding so many entries to the lockedAmount list that simply looping through the entries would consume too much gas.
From then on, all the funds would be forever locked into the contract.
Even simply loading the list into memory and deserializing the data at the beginning of the call could use so much gas that any call to the contract would fail.
The same attack could happen with any kind of data that can grow indefinitely, including:
- integers and nats, as they can be increased to an arbitrarily large value
- strings, as there is no limit on their lengths
- lists, sets, maps, that can contain an arbitrary number of items
Best practice
There are three different ways to avoid this type of issue:
1. Prevent data from growing too much:
- make it expensive, by requesting a minimum deposit for each call that increases the size of the stored data.
- set a hard limit, by rejecting any call that increases the size of the stored data beyond a given limit.
Here is a version of the contract with these two fixes:
TimeLock (fixed): storage and entry point effects
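A short sketch of these two mitigations, a minimum deposit and a hard size limit, follows; the threshold values are arbitrary assumptions, not values from the original contract.

```python
# Illustrative sketch: make growth expensive and bounded (values are arbitrary).
MIN_DEPOSIT = 1_000_000    # 1 mav minimum locked per entry
MAX_ENTRIES = 1_000        # hard limit on the size of the stored list

class TimeLock:
    def __init__(self, owner):
        self.owner = owner
        self.locked_amounts = []

    def lock_amount(self, deadline, amount):
        assert amount >= MIN_DEPOSIT, "deposit too small"
        assert len(self.locked_amounts) < MAX_ENTRIES, "too many entries"
        self.locked_amounts.append((deadline, amount))
```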
2. Store data in a big-map
Unless it's already in the cache, all data in the storage of a contract needs to be loaded and deserialized when the contract is called, and reserialized afterwards. Long lists, sets or maps therefore can increase gas consumption very significantly, to the point that it exceeds the limit per operation.
Big-maps are the exception: instead of being deserialized/reserialized entirely for every call, only the entries that are read or written are deserialized, on demand, during the execution of the contract.
Using big-map allows contracts to store unlimited data. The only practical limit is the time and costs of adding new entries.
This is, of course, only useful if the contract only loads a small subset of entries during a call.
3. Do computations off-chain
The following is an approach you should always try to apply when designing your contracts:
Avoid doing computations on-chain if they can be done off-chain. Pass the results as parameters of the contract, and have the contract check the validity of the computation.
In our example, we could store all the data about locked funds in a big-map. The key could simply be an integer that increments every time we add an entry.
Whenever the user wants to claim funds with expired deadlines, do the computation off-chain, and send a list of keys for entries that have an expired deadline and significant funding.
Here is our example contract fixed using this and the previous approach:
TimeLock (fixed): storage and entry point effects
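Here is a minimal Python sketch of this approach: entries live in a key-value store (a plain dict standing in for a big-map), and the caller passes to claimExpired the exact keys to process, computed off-chain. The names and structure are assumptions based on the description above.

```python
import time

# Illustrative sketch: entries in a big-map-like store, keys selected off-chain.
class TimeLock:
    def __init__(self, owner):
        self.owner = owner
        self.next_id = 0
        self.locked_amounts = {}   # stands in for a big-map: id -> (deadline, amount)

    def lock_amount(self, deadline, amount):
        self.locked_amounts[self.next_id] = (deadline, amount)
        self.next_id += 1

    def claim_expired(self, caller, now, keys):
        # The caller computed `keys` off-chain; the contract only checks them.
        assert caller == self.owner
        total = 0
        for key in keys:
            deadline, amount = self.locked_amounts[key]
            assert deadline <= now, "deadline not expired"
            total += amount
            del self.locked_amounts[key]
        return total   # stands for: transfer `total` to the owner

lock = TimeLock("owner")
lock.lock_amount(deadline=0, amount=3_000_000)
lock.lock_amount(deadline=2**62, amount=1_000_000)    # not expired yet
# Off-chain: scan the entries and keep only expired ones worth claiming.
expired = [k for k, (d, a) in lock.locked_amounts.items() if d <= time.time() and a > 0]
print(lock.claim_expired("owner", time.time(), expired))   # 3_000_000
```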
With this approach, the caller has full control on the list of entries sent to claimExpired and on its size, so there is no risk of getting the contract locked.
4. Needlessly relying on one entity to perform a step
Summary
Relying on one entity involved in a contract to perform a step that shouldn't require that entity's approval, breaks the trustless benefits that a smart contract is intended to provide. In some cases, it may cause funds to get locked in the contract, for example if this entity becomes permanently unavailable.
Question: can you find the flaw in this version of an NFT Auction contract?
Auction: storage and entry point effects
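A minimal Python sketch of the auction logic follows, to make the question concrete. The entry point names bid and claimNFT follow the text; the rest (deadline handling, escrow representation) is assumed and simplified.

```python
# Illustrative sketch of the auction logic; not actual contract code.
class Auction:
    def __init__(self, seller, nft_id, deadline):
        self.seller = seller
        self.nft_id = nft_id        # NFT held in escrow by the contract
        self.deadline = deadline
        self.top_bidder = None
        self.top_bid = 0

    def bid(self, caller, amount, now):
        assert now < self.deadline and amount > self.top_bid
        # (the previous top bidder would be refunded here)
        self.top_bidder, self.top_bid = caller, amount

    def claim_nft(self, caller, now):
        assert now >= self.deadline, "auction still running"
        # FLAW: only the top bidder may call this. If they never do, the NFT,
        # their payment, and the seller's proceeds all stay locked forever.
        assert caller == self.top_bidder, "only the top bidder may claim"
        # FIX: drop the check above and transfer to self.top_bidder no matter
        # who the caller is; the entry point is then safe to open to anyone.
        return {"nft_to": self.top_bidder, "funds_to": self.seller}
```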
Answer: the issue with this contract is that the claimNFT entry point only allows the topBidder to call it. If this user never calls the entry point, not only does the amount they paid stay stuck in the contract (so the seller never receives the funds), but the NFT also stays stuck in the contract. This is a pure loss for the seller.
The top bidder has a strong incentive to call claimNFT, as they have already paid for the NFT and benefit from the call by getting the NFT they paid for. However, they may simply have disappeared somehow, lost access to their private keys, or simply forgotten about the auction. Worse, as they have full control over whether the seller gets the funds or not, they could use this as leverage on the seller, to try to extort some extra funds from them.
Requiring the seller themselves to call the entry point instead of the topBidder would lead to a similar issue.
In this case, the solution is simply to allow anyone to call the entry point, and make sure the token is transferred to topBidder, no matter who the caller is. There is no need to have any restriction on who may call this entry point.
Best practice
When reviewing a contract, go through every entry point and ask: "What if someone doesn't call it?"
If something bad would happen, consider these approaches to reduce the risk:
- Make it so that other people can call the entry point, either by letting anyone call it, if it is safe, or by having the caller be a multi-sig contract, where only a subset of the people behind that multi-sig need to be available for the call to be made.
- Add financial incentives: require a deposit from the entity expected to call the entry point, which they get back when they make the call. This reduces the risk that they simply decide not to call it, or forget to do so.
- Add a deadline that allows the other parties to get out of the deal if one party doesn't do their part in time. Be careful, as in some cases, giving people a way to get out of the deal may make the situation worse.
5. Trusting signed data without preventing wrongful uses
Summary
Using a signed message from an off-chain entity as a way to ensure that this entity agrees to a certain operation, or certifies the validity of some data, can be dangerous. Make sure you prevent this signed message from being used in a different context than the one intended by the signer.
Example of attack
Let's say that off-chain, Alice cryptographically signs a message that says "I, Alice, agree to transfer 100 tokens to Bob", and that Bob can then call a contract that accepts such a message, and does transfer tokens from Alice to him.
Ledger: storage and entry point effects
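Here is a minimal Python sketch of the flawed logic and of the replay. The sign and verify_signature helpers are toy stand-ins for the chain's signature check, not real cryptographic calls; the method name transfer_with_signature is an assumption.

```python
# Illustrative sketch of the flawed signed-transfer logic.
def sign(secret_owner, message):
    return ("signed", secret_owner, message)               # toy stand-in

def verify_signature(public_key, message, signature):
    return signature == ("signed", public_key, message)    # toy stand-in

class Ledger:
    def __init__(self):
        self.balances = {"alice": 500, "bob": 0}

    def transfer_with_signature(self, message, signature):
        sender, receiver, amount = message
        # FLAW: nothing ties the message to this contract, and nothing prevents
        # the same message from being replayed again and again.
        assert verify_signature(sender, message, signature)
        self.balances[sender] -= amount
        self.balances[receiver] += amount

ledger = Ledger()
msg = ("alice", "bob", 100)    # "I, Alice, agree to transfer 100 tokens to Bob"
sig = sign("alice", msg)
ledger.transfer_with_signature(msg, sig)
ledger.transfer_with_signature(msg, sig)   # replay: Bob gets another 100 tokens
print(ledger.balances)                     # {'alice': 300, 'bob': 200}
```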
Bob could steal tokens from Alice in two different ways:
- Bob could call the contract several times with the same message, causing multiple transfers of 100 tokens from Alice to him.
- Bob may send the same message to another similar contract, and steal 100 of Alice's tokens from that other contract.
Best practice
To make sure the message is meant for this contract, simply include the address of the contract in the signed message.
Preventing replays is a bit more complex, and the solution may depend on the specific situation:
- Maintain a counter for each potential signer in the contract, and include the current value of that counter in the next message. Increment the counter when the message is received.
Ledger (fixed): storage and entry point effects
This is the approach used in the Mavryk protocol itself, for preventing replays of native transactions. However, it prevents sending multiple messages from the same signer within a short period. This could be quite inconvenient for the present use case, as the whole point of signed messages is that they can be prepared off-chain and used much later.
- Include a unique arbitrary value in the message: the contract can then keep track of which unique values have already been used. The only downside is the cost of the extra storage required.
Ledger (fixed): storage and entry point effects
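A sketch of this second fix follows: the signed message includes both the contract's own address and a unique value that can only be used once. As before, verify_signature is a toy stand-in and the address strings are placeholders, not real Mavryk addresses.

```python
# Illustrative sketch of the fixed signed-transfer logic (unique value + address).
def verify_signature(public_key, message, signature):
    return signature == ("signed", public_key, message)    # toy stand-in

class Ledger:
    def __init__(self, own_address):
        self.own_address = own_address
        self.balances = {"alice": 500, "bob": 0}
        self.used_ids = set()          # unique values already consumed

    def transfer_with_signature(self, message, signature):
        contract_address, unique_id, sender, receiver, amount = message
        assert contract_address == self.own_address, "message meant for another contract"
        assert unique_id not in self.used_ids, "message already used"
        assert verify_signature(sender, message, signature)
        self.used_ids.add(unique_id)
        self.balances[sender] -= amount
        self.balances[receiver] += amount

ledger = Ledger(own_address="ledger-contract-address")
msg = ("ledger-contract-address", 42, "alice", "bob", 100)
sig = ("signed", "alice", msg)
ledger.transfer_with_signature(msg, sig)
try:
    ledger.transfer_with_signature(msg, sig)   # replay attempt
except AssertionError as error:
    print(error)                               # message already used
```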
6. Not protecting against bots (BPEV attacks)
Summary
On a blockchain, all transactions, including calls to smart contracts, transit publicly on the P2P gossip network, before a block producer includes some of them in a new block, in the order of their choosing. In the absence of protection measures such as commit and reveal schemes and time locks, some of these contract calls may contain information that can be intercepted and used by bots, to the disadvantage of the original caller, and often, of the health of the blockchain itself.
Example of attack
Let's consider a smart contract for a geocaching game, where users get rewarded with some mav if they are the first to find one of the passwords hidden in capsules placed in a number of physical locations. The contract contains the hash of each of these passwords. When a user calls the contract with a password that has never been found before, they receive a reward.
Geocaching: storage and entry point effects
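A minimal Python sketch of this flawed design follows: the password travels in clear in the claim transaction, so anyone who sees it on the gossip network can reuse it. The class and method names are assumptions based on the description.

```python
import hashlib

# Illustrative sketch of the flawed geocaching contract.
class Geocaching:
    def __init__(self, password_hashes, reward):
        self.password_hashes = password_hashes    # set of sha256 hex digests
        self.found = set()
        self.reward = reward

    def claim_reward(self, caller, password):
        digest = hashlib.sha256(password.encode()).hexdigest()
        assert digest in self.password_hashes, "unknown password"
        assert digest not in self.found, "already claimed"
        self.found.add(digest)
        return (self.reward, caller)   # stands for: transfer the reward to caller

game = Geocaching({hashlib.sha256(b"capsule-17-secret").hexdigest()}, reward=10)
# The password is visible in the mempool before the block is produced,
# so a bot can copy this call with itself as the caller and a higher fee.
print(game.claim_reward("honest-user", "capsule-17-secret"))
```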
A bot may listen to the gossip network and notice the call to claimReward, along with the password, before it is included in the next block.
This bot may simulate performing the same transaction with itself as the caller, and find out that this results in it receiving a reward. It can do so automatically, without knowing anything about the contract.
All it then has to do is to inject this new transaction, using a higher fee than the original caller, so that the baker of the next block includes it earlier in that block. Indeed, bakers usually place transactions with high fees earlier in their block. If successful, the bot gets the reward, while the original caller who did all the work of finding the capsule, gets nothing.
In the end, as multiple bots are likely to compete for this reward, they will engage in a bidding war over this transaction. The baker itself ends up getting most of the benefits from this attack, as it collects the increased fees. For this reason, this type of attack is part of what we call Block Producer Extractable Value (BPEV).
Overall, the existence of such attacks has a negative impact on the health of the blockchain, as they eliminate the incentive for doing the actual work of finding opportunities. The bidding wars generate unnecessary transactions that slow the network while increasing the fees callers have to pay for their legitimate transactions to get included in the next block.
Other types of bot attacks and BPEV
There are a number of different ways bots can take advantage of the fact that calls to smart contracts are publicly visible on the gossip network before they are included in a block.
Furthermore, block producers can take advantage of the fact that they have significant control on the content of the block: which transactions get included and in which order, as well as what will be the precise value for the timestamp of the next block.
A lot of these attacks take place in the field of DeFi (Decentralized Finance), where the value of assets changes all the time, including within a single block.
Copying a transaction is the easiest attack, as is done in our example. The most common such situation is the case of arbitrage opportunities. Consider, for example, two separate DEXes (Decentralized EXchanges), that temporarily offer a different price for a given token. An arbitrage opportunity exists as one may simply buy tokens on the DEX with the cheaper price, and resell them for more on the other DEX. A user who carefully analyzes the blockchain and detects such an opportunity may get this opportunity (and all their work to detect it) snatched from them by a bot that simply copies their transaction.
Injecting an earlier transaction before the attacked transaction. For example, if a user injects a transaction to purchase an asset at any price within a certain range, a bot could insert another transaction before it, that purchases the asset at the low end of that range, then sells it to this user at the high end of that range, pocketing the difference.
Injecting a later transaction right after the target transaction. For example, as soon as an arbitrage opportunity is created by another transaction. This isn't an attack against the target transaction itself, but the numerous transactions generated by bots fighting for the right spot in the sequence may cause delays in the network, or unfairly benefit the block producer, who gets to decide in which order transactions are performed within the block.
Sandwich attacks, where a purchase transaction is injected to manipulate the price of an asset before the target transaction occurs, and a later sale transaction is injected to sell the asset with a profit, at the expense of the target transaction.
More complex schemes that manipulate the order of transactions to maximize profits can be designed, all at the cost of healthier and legitimate uses.
Best practice
Preventing this type of attack is not easy, but part of the solution is to use a commit and reveal scheme.
This scheme involves two steps:
Commit: the user sends a hash of the information they intend to send in the next step, without revealing that information yet. The information hashed should include the user's address, to prevent another user (or bot) from simply sending the same commit message.
Reveal: once the commit call has been included in a block, the user then sends the actual message. The contract can then verify that this message corresponds to the commit sent in the previous step, and that the caller is the one announced in that message.
Using this approach is sufficient to fix our example:
- The commit message sent by the user who found the capsule may simply be a hash of a pair containing the password and the address of the caller.
- The reveal call will simply send the password.
Here is the fixed contract:
Geocaching (fixed): storage and entry point effects
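Here is a minimal Python sketch of the commit and reveal fix. It is illustrative only; in particular, a real contract would also check that the commit was included in an earlier block than the reveal, which this sketch leaves out.

```python
import hashlib

# Illustrative sketch of the commit and reveal fix: the commit binds the
# password to the caller's address, so copying the reveal is useless to a bot.
def commitment(password, caller):
    return hashlib.sha256(f"{password}:{caller}".encode()).hexdigest()

class Geocaching:
    def __init__(self, password_hashes, reward):
        self.password_hashes = password_hashes
        self.commitments = set()
        self.found = set()
        self.reward = reward

    def commit(self, caller, commit_hash):
        # Step 1: publish only a hash of (password, caller); it reveals nothing.
        self.commitments.add(commit_hash)

    def reveal(self, caller, password):
        # Step 2: once the commit is in a block, send the actual password.
        assert commitment(password, caller) in self.commitments, "no matching commit"
        digest = hashlib.sha256(password.encode()).hexdigest()
        assert digest in self.password_hashes and digest not in self.found
        self.found.add(digest)
        return (self.reward, caller)   # stands for: transfer the reward to caller

game = Geocaching({hashlib.sha256(b"capsule-17-secret").hexdigest()}, reward=10)
game.commit("honest-user", commitment("capsule-17-secret", "honest-user"))
# A bot copying the reveal with its own address fails: its commit doesn't match.
print(game.reveal("honest-user", "capsule-17-secret"))
```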
Using financial incentives and Timelock for extra protection
Our geocaching example is a straightforward case where the situation is binary: either a user found a password, or they didn't.
Other situations may be more complex, and attackers may generate a number of commitments for different messages, in the hope that once they collect information during the reveal phase, revealing one of them will be beneficial to them.
This could be as simple as one commitment that bets that the value of an asset will increase, and another that bets that the value will decrease. Depending on what happens when other users reveal their own messages that may affect the price of this asset, the attacker may decide to reveal one or the other message.
To protect against such attacks, financial incentives can be used, where users have to send a deposit along with each commitment. Users who never reveal the message corresponding to their previous commit lose their deposit.
Furthermore, the TimeLock cryptographic primitive may be used instead of a hash for the commit phase. This approach allows anyone to decrypt the content of the commit, given enough time, therefore forcing the reveal of the committed value.
For more information, check Nomadic Labs' blog post on this topic.
7. Using unreliable sources of randomness
Summary
Picking a random value during a contract call, for example for selecting the winner of a lottery, or fairly assigning newly minted NFTs to participants of a pre-sale, is far from being easy. Indeed, as any contract call must run on hundreds of nodes and produce the exact same result on each node, everything needs to be predictable. Most ways to generate randomness have flaws and can be taken advantage of.
Examples of bad sources of randomness
The timestamp of a block. Using the current timestamp of the local computer is a commonly used source of randomness on non-blockchain software, as the value is never the same. However, its use is not recommended at all in security sensitive situations, as it only offers a few digits of hard-to-predict randomness, with precision in microseconds or even milliseconds.
On the blockchain, it's a very bad idea, for multiple reasons:
- the precision of the timestamp of a block is only in seconds
- the value can be reasonably well predicted, as bakers often take a similar time to produce their block
- the baker of the previous block can manipulate the timestamp it sends, therefore controlling the exact outcome.
The value of a new contract's address. A contract may deploy a new contract and obtain its address. Unfortunately, a contract address is far from being as random as it looks. It is simply computed based on the operation group hash and an origination index (starting from 0 which is increased for every origination operation in the group). It can therefore be easily manipulated by the creator of the operation which is no better than trusting them.
The exchange rate between currencies. One may consider using an off-chain Oracle to obtain the exchange rate between two common currencies, such as between the USD and Euro, and use it to get a few bits of entropy. After all, anyone would only dream of predicting the value of such exchange rates, so it might as well be considered random. There are, however, a number of issues with this approach:
- We can only get a few bits of entropy (randomness), which is usually insufficient.
- One of the entities behind the off-chain Oracle could influence the exact value. The exact way to do this depends on the specifics of the Oracle, but it's likely that there is a way to do so.
- A baker could also censor some of the transactions involved in the off-chain Oracle, and by doing so, influence the exact value as well.
A bad off-chain randomness Oracle. Anyone can create an off-chain Oracle, and claim that this Oracle provides secure random values. Unfortunately, generating a random value off-chain in a reliable way, so that no single entity may influence or predict the outcome, is extremely hard. Don't blindly trust an Oracle, even if you find that many contracts use it already. A bad random Oracle may be the worst choice, as it could simply stop working and make your contract fail, or be under the control of a single person who then gains full control over all the outcomes of your contract that rely on it.
The hash of another source of randomness. Hashing some input may seem like it produces some random output, spread rather evenly over a wide range of values. However, it is entirely predictable, and doesn't add any randomness to the value taken as input.
A combination of multiple bad sources of randomness. It may seem like combining two sources of not so good randomness may be a good way to increase the quality of the randomness. However, although combining multiple sources of randomness increases the amount of entropy and makes it harder to predetermine the outcome, it also increases the risk for one entity to control this outcome. This entity only needs to have some control over the value of one of the sources of randomness, to gain the capacity to have some control over the final result.
Remember that an attacker only needs the ability to pick between two possible outcomes, or to predict which one is more likely, to significantly increase their chance of getting an outcome that benefits them.
Best practice
The best practice is to avoid having to rely on a source of randomness, if you can. This avoids issues of reliability of the randomness source (which may stop working in the future), predictability of the outcome, or even control of the outcome by one party, such as a block producer.
If you really need a source of randomness, the two following approaches may be used:
Combine random values provided by every participant. Each potential beneficiary of a random selection could contribute to the randomness: get each participant to pick a random value within a wide range, add all the received values, and use the result as a source of randomness. For this to work, a commit and reveal scheme needs to be used, combined with financial incentives and the use of a timelock cryptographic primitive, to make sure none of the participants may pick between different outcomes simply by not revealing their value (see the sketch below). This is a bit tricky to do well, and for it to be practical, it requires participants to be able to react fast, as the window for each participant to commit their random value has to be very short (a small number of blocks).
Use a good randomness Oracle. It is, in theory, possible to create a good off-chain random Oracle. Chainlink offers a randomness Oracle based on a verifiable random function (VRF), and may be one of the few, if not the only reasonably good available randomness Oracle. However, even assuming it's as good as it advertises, it is not ideal, as it is based on the assumption that there is no collusion between the owner of the smart contract that uses it, and some of the nodes that provide random values. Finally, Chainlink VRF currently is only available on a small number of blockchains, which don't include Mavryk. At the time of writing, there is no good randomness Oracle on Mavryk that we would recommend.
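The following Python sketch illustrates the participant-contribution approach mentioned above in its simplest form. It is not a complete design: deposits, timelocks, and the deadlines on the commit and reveal windows are deliberately left out, and all names are assumptions.

```python
import hashlib
import secrets

# Illustrative sketch: combining random values from every participant
# with a commit and reveal scheme (deposits and timelocks omitted).
def commit_hash(value, salt, address):
    return hashlib.sha256(f"{value}:{salt}:{address}".encode()).hexdigest()

class Lottery:
    def __init__(self, participants):
        self.participants = participants
        self.commits = {}   # address -> committed hash (ideally with a deposit)
        self.reveals = {}   # address -> revealed value

    def commit(self, caller, h):
        self.commits[caller] = h

    def reveal(self, caller, value, salt):
        assert commit_hash(value, salt, caller) == self.commits[caller]
        self.reveals[caller] = value

    def draw_winner(self):
        # Everyone must have revealed; non-revealers would lose their deposit.
        assert set(self.reveals) == set(self.participants)
        seed = sum(self.reveals.values())
        return self.participants[seed % len(self.participants)]

lottery = Lottery(["alice", "bob", "carol"])
picks = {}
for who in lottery.participants:                 # each participant, off-chain
    value, salt = secrets.randbelow(2**64), secrets.token_hex(8)
    picks[who] = (value, salt)
    lottery.commit(who, commit_hash(value, salt, who))
for who, (value, salt) in picks.items():
    lottery.reveal(who, value, salt)
print(lottery.draw_winner())
```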
8. Using computations that cause mav overflows
Summary
At the time of writing, mav values are stored as signed 64-bit integers. Overflows or underflows are not possible on Mavryk, as all the basic operations will generate an error (failure) if the result exceeds the range of acceptable values. However, this doesn't mean you never need to worry about them, as these failures may prevent your contract from working and funds may end up being locked in the contract.
Example of failure
Let's say you use this formula as part of some computation: result = a * a / b
Now suppose that a and b are both 5000 mav, or more precisely, 5,000,000,000 mumav.
The intermediate value a * a is worth 25,000,000,000,000,000,000 mumav, which is more than the largest value that can be stored in a 64-bit signed integer, about 9.2 × 10^18 mumav (precisely 9,223,372,036,854,775,807).
This computation, if done using the mav type, will therefore fail, even though the final result, 5000 mav, easily fits within a 64-bit signed integer.
The protection against overflows will prevent the transfer of an incorrect value, but will do so by preventing the call from being done entirely, which in itself could cause a major issue, such as locking the contract, with large sums of mav stuck forever.
Best practice
The main recommendation is to be very careful when doing computations with the mav type, and double-check that any intermediate values may never cause an overflow or underflow.
A good way to avoid such issues is to use int or nat as the type for storing these intermediate computations, as these types can hold arbitrarily large values.
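The arithmetic below, in plain Python, reuses the numbers from the example above to show why the intermediate value overflows while the final result does not; the choice of formula and of the 64-bit bound follows that example.

```python
# Illustrative arithmetic: why an intermediate mumav value can overflow.
MAX_MUMAV = 2**63 - 1                  # 9_223_372_036_854_775_807

a = b = 5_000 * 1_000_000              # 5000 mav expressed in mumav
product = a * a                        # 25_000_000_000_000_000_000 mumav
print(product > MAX_MUMAV)             # True: this intermediate value overflows
print(product // b)                    # 5_000_000_000 mumav = 5000 mav, which fits
# Doing the intermediate computation on unbounded integers (int/nat) and only
# converting the final result back to mav avoids the failure.
```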
9. Contract failures due to rounding issues
Summary
As a contract call will simply fail if it tries to transfer even slightly more than its own balance, it is very important to make sure that when splitting the balance into multiple amounts sent to several people, the total can never exceed the balance, even by a single mumav, due to a rounding issue. More generally, the rounding caused by performing integer divisions can be dangerous if not done carefully.
Example of flaw
The following contract will fail whenever the balance is not a multiple of 4 mumav:
Distribution: storage and entry point effects
Indeed, let's say the current balance is 101 mumav:
- ediv(balance, 4) returns a pair with the result of the integer division of balance by 4, and the remainder of this division.
- With a balance of 101 mumav, quarter will be 25 mumav.
- The amount transferred to userA will be 101 - 25 = 76 mumav.
- The amount transferred to userB will be 101 - 3 * 25 = 26 mumav.
- The total amount the contract attempts to transfer is 102 mumav, which is more than the balance. The call will fail.
Best practice
When transferring a portion of the balance of a contract, try to base your computations on what remains in the balance after the previous transfers. Here is one way to write the contract in a safer way:
Distribution (fixed): storage and entry point effects
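The small Python sketch below illustrates the "compute from what remains" idea with the same 101 mumav balance; the split (three quarters to userA, the rest to userB) follows the example above, and the function name is an assumption.

```python
# Illustrative sketch: base each transfer on what remains after the previous ones.
def distribute(balance):
    quarter = balance // 4
    to_user_a = balance - quarter        # three quarters, rounded up
    to_user_b = balance - to_user_a      # exactly what remains: never overshoots
    assert to_user_a + to_user_b == balance
    return to_user_a, to_user_b

print(distribute(101))   # (76, 25): the total is 101, so the call cannot fail
```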
Whenever you perform divisions, be very careful about the impact that the incurred rounding may cause.
10. Re-entrancy flaws
Summary
A re-entrancy attack was the cause of the infamous DAO hack that took place on Ethereum in June 2016, and eventually led to the fork of Ethereum into Ethereum and Ethereum Classic. Mavryk has been designed in a way that makes re-entrancy bugs less likely, but they are still possible. They happen when the attacked contract calls another contract, which may in turn call the attacked contract in a way that breaks assumptions made by its internal logic.
Example of flawed contracts
The two contracts below manage unique tokens identified by IDs. The first contract is a simple ledger that keeps track of who owns each token. The second contract is in charge of purchasing tokens at predefined prices.
Question: can you figure out how to steal funds from them?
Ledger: storage and entry point effects
Purchaser: storage and entry point effects
Example of attack
Here is how a contract could use re-entrancy to steal some funds from the purchaser contract.
Attack: storage and entry point effects
Let's assume that:
- the attacker contract owns the token with ID 42
- the purchaser contract lists a price of 100 mav for it
- someone calls attackContract.attack(purchaserContract, 42)
Here is the succession of steps that will take place:
- attackContract.attack(purchaserContract, 42)
  - it creates a call to purchaserContract.offerToken(tokenID)
- purchaserContract.offerToken(42) is executed
  - it checks that the caller is the owner
  - it creates a transfer of purchasePrices[42].price to attackContract (the caller)
  - it creates a call to ledgerContract.changeOwner(42, self)
- attackContract.default() is executed
  - it decrements nbCalls to 1
  - it creates a call to purchaserContract.offerToken(42)
- 100 mav are transferred from purchaserContract to attackContract
- purchaserContract.offerToken(42) is executed
  - it checks that the caller is the owner (it still is)
  - it creates a transfer of purchasePrices[42].price to attackContract (the caller)
  - it creates a call to ledgerContract.changeOwner(42, self)
- attackContract.default() is executed
  - it decrements nbCalls to 0
  - it doesn't do anything else
- 100 mav are transferred from purchaserContract to attackContract
- ledgerContract.changeOwner(42, purchaserContract) is executed
  - it sets tokens[42].owner to purchaserContract
- ledgerContract.changeOwner(42, purchaserContract) is executed
  - it sets tokens[42].owner to purchaserContract
In the end, the attacker contract received 200 mav for a token that was priced at 100 mav, so it stole 100 mav from the purchaser contract. If we had initially set nbCalls to 10, and assuming there were enough funds in the balance of the purchaserContract, 10 calls would have been made, and it would have received 1000 mav for its token, stealing 900 mav.
What makes this flaw possible and hard to detect is that a new call to the purchaser contract can be initiated in the middle of the execution of its different steps, interfering with a business logic that otherwise seems very sound:
- send mav to the seller
- take ownership of the token
What really happens is:
- send mav to the seller
- seller does all kinds of things, including trying to sell its token a second time
- take ownership of the token
Best practice
There are two methods to avoid re-entrancy flaws.
1. Order the steps in a safe way.
The idea is to start with the steps that prevent future similar calls.
In our example, the flaw would have been avoided if we simply changed the order of these two instructions:
- create transfer of purchasePrices[tokenID].price to caller
- create call to tokenContract.changeOwner(tokenID, self)
Into:
- create call to tokenContract.changeOwner(tokenID, self)
- create transfer of purchasePrices[tokenID].price to caller
This approach can work, but as contracts become more complex, it can become really hard to make sure all cases are covered.
2. Use a flag to prevent any re-entrancy
This approach is more radical and very safe: put a boolean flag "isRunning" in the storage, which will be set to true while the contract is being used.
The code of the entry point should have this structure:
- check that isRunning is false
- set isRunning to true
- perform all the logic, including creating calls to other contracts
- create a call to an entry point that sets isRunning to false
Here is the new version of the contract, using both fixes:
Purchaser contract: storage and entry point effects
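To illustrate the structure of the fix, here is a minimal Python sketch combining the two methods: ownership changes before the payment is queued, and an isRunning flag rejects any re-entrant call. It is a structural sketch only; on-chain, resetting the flag would be done through a separate internal call at the end of the operation, and the names are assumptions.

```python
# Illustrative sketch of the fixed purchaser logic (re-ordered steps + flag).
class Purchaser:
    def __init__(self, ledger, purchase_prices, balance):
        self.ledger = ledger                   # token_id -> owner
        self.purchase_prices = purchase_prices # token_id -> price
        self.balance = balance
        self.is_running = False

    def offer_token(self, caller, token_id):
        assert not self.is_running, "re-entrant call rejected"
        self.is_running = True
        assert self.ledger[token_id] == caller, "caller does not own this token"
        # 1. Take ownership of the token first...
        self.ledger[token_id] = "purchaser"
        # 2. ...then queue the payment to the seller.
        price = self.purchase_prices[token_id]
        self.balance -= price
        self.is_running = False     # on-chain: a final internal call resets the flag
        return (price, caller)      # stands for: transfer `price` to the caller

purchaser = Purchaser({42: "seller"}, {42: 100}, balance=1000)
print(purchaser.offer_token("seller", 42))   # (100, 'seller')
# A second attempt now fails: the purchaser already owns token 42, and any
# re-entrant call made before the flag is reset is rejected outright.
```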
11. Unsafe use of oracles
Summary
Oracles provide a way for smart contracts to obtain information about the rest of the world, such as exchange rates, outcomes of games or elections. As they usually rely on services that are hosted off-chain, they don't benefit from the same safety measures and trustless features provided by the blockchain. Using them comes with its own flaws, from using oracles that provide information that can be manipulated by single entities or a small number of colluding entities, to oracles that simply stop working and provide obsolete information. Some types of oracles, such as online price oracles, may be manipulated by contracts to provide incorrect information. Every time an oracle returns inaccurate information, it creates an opportunity for attackers to take advantage of the situation and steal funds.
Reminder about oracles
A typical oracle is composed of two parts:
- an off-chain service that collects information from one or more sources
- an oracle smart contract, which receives this information, as well as requests from other contracts (in the case of on-demand oracles)
The off-chain service tracks the requests made to the smart contract, fetches the information, and calls the oracle contract with this information, so that it can store it and provide it to other contracts, usually for a fee.
Danger 1: using a centralized oracle
If the off-chain service is controlled by a single entity that just sends the requested information without any way to verify its origin and validity, anyone who uses this oracle is at risk for multiple reasons:
- reliability issue: if that single entity simply stops providing that service, every contract that relies on it simply stops working.
- accuracy issue: the single entity may suddenly decide to provide false information, causing all contracts that rely on this information to make bad decisions. The entity may then take advantage of these bad decisions.
Good decentralized oracles include systems that prevent single entities from stopping the oracle or manipulating the information it sends.
Best practice: only use oracles that are decentralized, in such a way that no single entity, or even no small group of colluding entities, may stop the oracle from working, or provide manipulated information.
Danger 2: not checking the freshness of information
Oracles often provide information that may change over time, such as the exchange rate between two currencies. Information that is perfectly valid at one point may become obsolete and incorrect just a few minutes later.
Good oracles always attach a timestamp to the information they provide. If for some technical reason, the off-chain part of the oracle is unable to send updated information to the oracle smart contract, then the stored information may get old. This could be caused by network congestion, or blocks getting full and bakers not including the oracle's transactions.
Best practice: make sure your contract always checks that the timestamp attached to information provided by oracles is recent.
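A freshness check can be as simple as the following Python sketch; the maximum age is an arbitrary assumption and should be chosen according to how fast the underlying data may change.

```python
import time

# Illustrative sketch of a freshness check on oracle data.
MAX_AGE_SECONDS = 5 * 60   # arbitrary threshold, to adapt to the use case

def use_oracle_price(price, price_timestamp, now):
    assert now - price_timestamp <= MAX_AGE_SECONDS, "oracle data is stale"
    return price   # safe to use in the rest of the contract logic

print(use_oracle_price(1.23, price_timestamp=time.time() - 60, now=time.time()))
```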
Danger 3: using on-chain oracles that can be manipulated
On-chain oracles don't provide data from off-chain sources. Instead, they provide access to data collected from other smart contracts. For example, an on-chain oracle could collect and provide data about the exchange rate between two tokens from one or more DEXes (Decentralized EXchanges) running on the same blockchain. Doing this safely is not as simple as it may seem.
Example of attack: an attacking contract could perform the following steps:
- use a flash-loan to borrow a lot of mav
- buy a large number of tokens from one DEX, which temporarily increases the price of this token in this DEX
- call a contract that makes bad decisions based on this manipulated price, obtained through an unprotected Oracle
- profit from these bad decisions
- sell the tokens back to the DEX
- pay the flash-loan back, with interest
In some cases, the current price could simply be a fluke, not caused by malicious intent but simply by a legitimate large transaction. The price may not represent the current real value.
Good on-chain oracles never simply return the current value obtained from a single DEX. Instead, they use recent but already finalized historical values, get rid of outliers, and use the median of the remaining values. When possible, they combine data from multiple DEXes.
Best practice: if you need to make decisions based on the price of tokens from a DEX, make sure you always get the prices through a good on-chain oracle that uses this type of measure.
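The Python sketch below shows the kind of robust aggregation described above (trimming extremes, then taking the median); the number of observations and the trimming rule are assumptions, not the specification of any particular oracle.

```python
from statistics import median

# Illustrative sketch: derive a price from recent past observations rather than
# from the single current value of one DEX, to blunt one-block manipulations.
def robust_price(observations):
    """observations: recent, already finalized prices from one or more DEXes."""
    assert len(observations) >= 5, "not enough data points"
    ordered = sorted(observations)
    trimmed = ordered[1:-1]        # drop the most extreme values (outliers)
    return median(trimmed)

# A manipulated spike (950) barely moves the result:
print(robust_price([101, 99, 100, 102, 98, 950, 100]))   # 100
```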
12. Forgetting to add an entry point to extract funds
Summary
Being the author of a contract, and having deployed the contract yourself, doesn't automatically give you any specific rights about that contract. In particular, it doesn't give you any rights to extract funds from the balance of your own contract. All the profits earned by your contract may be locked forever if you don't plan for any way to collect them.
Example of flawed contract
Here is a flash loan contract. Can you find the flaw?
Flash loan contract: storage and entry point effects
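Here is a minimal Python sketch of the situation: interest accumulates in the balance, but no entry point ever lets the admin extract it. The borrow/repay logic and the interest rate are simplified assumptions, not the actual contract's API.

```python
# Illustrative sketch of the flawed flash loan contract.
class FlashLoan:
    def __init__(self, admin, balance, interest_rate=0.001):
        self.admin = admin
        self.balance = balance
        self.interest_rate = interest_rate

    def borrow(self, borrower, amount):
        assert amount <= self.balance, "not enough liquidity"
        self.balance -= amount
        return amount   # stands for: transfer `amount`, repaid within the same operation

    def repay(self, amount_borrowed, amount_repaid):
        assert amount_repaid >= amount_borrowed * (1 + self.interest_rate)
        self.balance += amount_repaid

    # Missing piece, to be added before deployment, ideally behind a multi-sig:
    # def withdraw_profits(self, caller, amount):
    #     assert caller == self.admin
    #     self.balance -= amount
    #     return (amount, caller)   # transfer `amount` to the admin
```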
This flash loan contract may accumulate interest for years, with the owner happily watching the balance increase on a regular basis... One day, when this owner decides to retire, they will realize that they have no way to withdraw the profits, or even the initial funds.
Best practice
Always verify that you have some way to extract the benefits earned by your smart contract. Ideally, make sure you do so using a multi-sig contract, so that you have a backup system in case you lose access to your private keys.
Warning: this may seem very obvious, but note that this very unfortunate situation happens more often than you may think.
More generally, when you test your contract, make sure you test the whole life cycle of the contract, including what should happen at the end of its life.
13. Calling upgradable contracts
Summary
On Mavryk, contracts are not upgradable by default: once a contract has been deployed, there is no way to modify its code, for example to fix a bug. There are, however, several ways to make them upgradable. Doing so provides extra security for the author of the contract who has a chance to fix any mistakes, but may cause very significant risks for any user who relies on the contract.
Reminder: ways to upgrade a contract
There are two main ways to make a contract upgradable:
- put part of the logic in a piece of code (a lambda), in the storage of the contract
- put part of the logic in a separate contract, whose address is in the storage of the main contract
In either case, the contract provides an entry point that the admin may call to change these values, and therefore change the behavior of the contract.
Example of attack
Imagine that you write a contract that relies on an upgradable DEX contract. You have carefully checked the code of that DEX contract, which many other users have used before. The contract is upgradable, and you feel safe, because that means the author may be able to fix any bugs they may notice.
Then one day, all the funds disappear from your contract. As it often did, your contract used the DEX to exchange your tokens for a different type of token, but somehow, you never received the new tokens.
You then realize that the owner of the DEX has gone rogue, and decided to upgrade its contract, in a way that the DEX collects tokens but never sends any back in exchange.
Best practice
Before you use a contract, directly or as part of your own contract, make sure this contract can't be upgraded in a way that breaks the key aspects that you rely on.
If the contract you want to use is upgradable, make sure the upgrade system follows a very safe process, where the new version is known well in advance, and the decision to activate the upgrade is done by a large enough set of independent users.
14. Misunderstanding the API of a contract
Summary
There are many contracts that provide a similar, common service: DEXes, Oracles, Escrows, Marketplaces, Tokens, Auctions, DAOs, etc. As you get familiar with these different types of contracts, you start automatically making assumptions about how they behave. This may lead you to take shortcuts when interacting with a new contract, read the documentation and the contract a bit too fast, and miss a key difference between this contract and the similar ones you have used in the past. This can have very unfortunate consequences.
Best practice
Never make any assumptions about a contract you need to use based on your previous experience with similar contracts. Always check its documentation and code very carefully before you use it.