The answer depends on your security model.
Classically, a cryptographic hash function has three properties:
- It resists preimages: given y, it is infeasible to find x such that h(x) = y.
- It resists second preimages: given x, it is infeasible to find x' such that x ≠ x' and h(x) = h(x').
- It resists collisions: it is infeasible to find x and x' such that x ≠ x' and h(x) = h(x').
For a "perfect" hash function with an n-bit output, the effort needed to defeat these properties is, respectively, about 2^n, 2^n and 2^(n/2) (regardless of how strong the function is, "luck" still works with a small probability, and that gives these average costs for finding a preimage, a second preimage, or a collision).
When you truncate the output of an existing hash function, you are in fact defining a new hash function, and since that hash function has a smaller output, its resistance is correspondingly smaller.
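To make that concrete, here is a minimal Python sketch (the function name `truncated_sha256` is mine, purely for illustration): keeping 10 of SHA-256's 32 output bytes defines a new, 80-bit hash function.

```python
import hashlib

def truncated_sha256(data: bytes) -> bytes:
    # Keeping only 10 of the 32 output bytes defines a *new* hash
    # function with an 80-bit output: preimage and second-preimage
    # resistance drop to about 2^80, collision resistance to about 2^40.
    return hashlib.sha256(data).digest()[:10]

digest = truncated_sha256(b"some input")
assert len(digest) == 10  # 80 bits
```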
The right question is then: which of these properties are you relying upon? That depends a lot on the context.
For instance, suppose that the attacker's goal is to alter an existing piece of data, without being detected through a hash function mismatch. This is a second preimage situation: the attacker sees a message m and tries to find a modified message m' that hashes to the same value. In that case, truncating to 10 bytes means that you still have resistance 2^80, a huge number that will deter all but the most determined or irrational attackers. Note, though, that an attacker may have a choice of targets: if the attacker sees 100 messages, and wants to modify any one of them (with no preference over which), then "luck" works 100 times better. Thus you may want some extra security margin and keep, say, 12 or 13 bytes of output.
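The multi-target effect is just a division of the expected effort, as this back-of-the-envelope computation shows (the numbers are illustrative, not from any real attack):

```python
# Truncating to 10 bytes leaves an 80-bit output. With T equally
# acceptable targets, the attacker's expected effort for finding a
# second preimage of *some* target drops by a factor of T.
n_bits = 80
targets = 100
effort = (2 ** n_bits) // targets  # expected hash evaluations

print(f"single target : 2^{n_bits}")
print(f"{targets} targets   : about 2^{effort.bit_length() - 1}")  # ~2^73
```

Two or three extra output bytes more than restore the lost margin.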
Now if the attacker can get to inject his own innocent-looking message m, which you validate and hash, and will later on replace with a distinct message m' with the same hash value, then this is a question of collisions, thus working with resistance 2^(n/2) for an n-bit output. In that case, truncating the hash function output is quite a bad idea.
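A toy experiment (my own illustration, truncating to 2 bytes so the search finishes instantly) shows how quickly collisions appear on a short output: for a 16-bit hash, the birthday bound predicts a collision after roughly 2^8 = 256 attempts, vastly less than the 2^16 needed for a preimage.

```python
import hashlib
import itertools

def tiny_hash(data: bytes) -> bytes:
    # A deliberately weak 16-bit hash: SHA-256 truncated to 2 bytes.
    return hashlib.sha256(data).digest()[:2]

seen = {}  # hash value -> first message that produced it
for i in itertools.count():
    msg = str(i).encode()
    h = tiny_hash(msg)
    if h in seen:
        print(f"collision after {i + 1} messages:")
        print(f"  {seen[h]!r} and {msg!r} share hash {h.hex()}")
        break
    seen[h] = msg
```

The same birthday effect is why a 10-byte (80-bit) truncation offers only about 2^40 collision resistance, within reach of a well-funded attacker.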
The safe course is not to truncate anything. After all, as I write above, truncating the output is equivalent to designing your own hash function, and, in all generality, this is very hard to do properly. SHA-1 has a 160-bit output precisely so that you get "at least 2^80 security" in all cases. Determining whether collisions apply to your situation or not can be hard.
Also, remember that a hash value does not create integrity; it just concentrates the issue. When you hash a piece of data, you get a hash value, and that hash value will guarantee that the original data is unchanged only insofar as you can be sure, through some other means, that the hash value itself was not altered. If you store the hash value along with the data that is hashed (your "input"), and the attacker is assumed to be able to alter the stored data, then what exactly would prevent him from modifying the hash value as well? Whenever he wants to put m' instead of m in some database cell, he may also put SHA-1(m') into the neighbouring cell, overwriting the SHA-1(m) that was there, and you will be none the wiser.
If you want to use hash values to protect data integrity, then you must ensure the integrity of the hash values themselves in some other way. Hash functions reduce the problem: instead of protecting a 1-gigabyte file, you "just" need to protect a 20-byte hash value. But you still have to do something.
Cryptography can help you further concentrate things by using a MAC instead of a hash function. With a MAC, you have a secret key, and you can use a single key for millions of MACs computed over millions of inputs. There again, you still have to do something, but that something is: keep a single key (of, say, 128 bits) confidential for the whole server.
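For instance, with HMAC from Python's standard library (the key generation shown here is illustrative; a real deployment would generate the 128-bit key once and store it securely server-side):

```python
import hashlib
import hmac
import secrets

# One secret key, kept confidential on the server, authenticates
# any number of inputs.
secret_key = secrets.token_bytes(16)  # 128-bit key

def tag(data: bytes) -> bytes:
    return hmac.new(secret_key, data, hashlib.sha256).digest()

def verify(data: bytes, received_tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(tag(data), received_tag)

t = tag(b"stored record")
assert verify(b"stored record", t)
assert not verify(b"tampered record", t)
```

Unlike a bare hash stored next to the data, an attacker who alters a record cannot recompute a matching tag without the key.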