Question in short: If I consider just the first 16 bytes of SHA-1 and SHA-256 hashes, do they have substantially the same collision risk?
Background:
I have an application where I need a 16-byte (exactly) hash of a short string (a few bytes to tens of bytes).
My plan is to use SHA-1 and simply truncate the digest to 16 bytes. A colleague objects: "Don't use SHA-1 -- it's BROKEN!"
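For concreteness, the truncation I have in mind is just keeping the first 16 bytes of the digest (a Python sketch using the standard hashlib module; the helper name `hash16` is my own):

```python
import hashlib

def hash16(data: bytes, algorithm: str = "sha1") -> bytes:
    """Return the first 16 bytes of the chosen hash's digest."""
    return hashlib.new(algorithm, data).digest()[:16]

msg = b"a short string"
print(hash16(msg, "sha1").hex())    # 16-byte prefix of the SHA-1 digest
print(hash16(msg, "sha256").hex())  # 16-byte prefix of the SHA-256 digest
```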
My belief is that for SHA-1 and SHA-256 (for example), the first 16 bytes will have very similar collision probabilities.
(Furthermore, since the input messages are short, any colliding message an attacker could find would probably NOT be of a similar length. Is that true? I don't care if an attacker can find a 200-byte message whose hash collides with one of mine.)
This question addresses the actual collision probability of the first N bytes of MD5 in particular, making the rather strong assumption that the hashes are uniformly distributed in the first N bytes. If that is a good assumption, wouldn't any well-designed hash algorithm have the same collision risk when considering just the first N bytes? (Obviously, I am quite capable of designing a very bad hash algorithm that gives lots of collisions in any number of bytes. But I'm considering only well-known and well-tested hash algorithms.)
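Under that uniformity assumption, the accidental-collision risk among n distinct inputs follows the usual birthday approximation, p ≈ 1 − exp(−n(n−1)/(2·2^128)), regardless of which underlying hash produced the 16 bytes. A sketch of that calculation (the function name and the sample count of 10^9 are illustrative choices of mine):

```python
import math

def birthday_collision_prob(n: int, bits: int = 128) -> float:
    """Approximate probability of at least one collision among n
    uniformly random values drawn from a space of size 2**bits.
    Uses expm1 to stay accurate when the probability is tiny."""
    return -math.expm1(-n * (n - 1) / 2 / 2**bits)

# Even a billion inputs leave the 128-bit collision probability
# negligible (on the order of 10**-21).
print(birthday_collision_prob(10**9))
```

The probability only becomes appreciable near n ≈ 2^64 inputs, which is the usual birthday bound for a 128-bit output.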
My claim: If I'm going to use only the first 16 bytes, it doesn't matter if I choose SHA-1, SHA-256, or SHA-OneZillion.
(This does seem like a question that would have been asked a hundred times, but I could not find it. I apologize if I missed it, and will be grateful if someone points me to previous postings.)