Among IT security people and computer power users there seems to be an excessive amount of distrust. Because of it, they refuse to do or use certain things, or use what seems like an excessive amount of protection.

Note: I am writing this from the point of view of a person living in the US. The following assumptions obviously wouldn't make sense under some oppressive governments.

  • Distrust of X because of MITM or general interception - Exactly how often does a MITM attack happen? It seems that every time I hear "Encryption, Encryption, Encryption!" it's coming from a person who expects that there's a MITM attack happening everywhere; at this very moment there's a person sitting outside of their house with a wire hooked up somewhere, eagerly looking at Wireshark. Let's be serious, this doesn't happen often (to, say, 90% of the population).
  • Distrust of Service X or Program Y because you haven't verified its source - I've seen this before with everything from http://lastpass.com and http://random.org to http://gmail.com. This I think is crazy, because your looking at the code does NOT guarantee that it's clean. All it takes is one innocent-looking line to do damage. And you're supposed to find this line in 5,000, 10,000, 20,000 lines of code? Be serious, you're most likely not going to find it and therefore going to be using it with a false sense of security, which is arguably worse than using it with a bit of distrust.
  • Distrust of public, shared, or friends' networks because of the risk of snooping - This I have to argue with, because most people don't have a packet logger or other ways of intercepting traffic. I'm sure 99% of networks out there just don't care about these kinds of things; they're more worried about routing and firewalling.
  • Distrust of protocol X because it sends passwords in the clear - This is what really made me ask this question: people kept blaming FTP sending passwords in the clear for how the account got compromised. While in that situation it made sense, it seems this excuse is thrown out every time there's a compromise when something else is really the issue (e.g. the password is on a sticky note on the monitor). This goes back to earlier: rarely (if at all) is there a packet sniffer or other form of snooping on your network, your ISP's network, your ISP's ISP's network, etc.
  • Distrust of anything because "How do you know it's not compromised?" - How do you know there isn't a nuclear bomb under your house? What, going to dig up everything just to be sure? How do you know you won't get mugged on the way to work? Going to have a personal bodyguard? Also, how many times does this happen? Is there a nuclear bomb found under someone's house every day? Do you get mugged when crossing the street? Essentially I'm saying that while you don't know, you don't know a lot already, and even then, the risk of it happening is slim.

Is distrust of everything in these cases and others really necessary? Is the level of paranoia warranted? And why would people act this way?

Ladadadada
TheLQ
  • Without more context, this question doesn't have a clearly "right" answer. This is also in some ways a duplicate of another question with a clearer goal which is less argumentative: [How do you manage security-related OCD (i.e. paranoia)?](http://security.stackexchange.com/questions/3339/how-do-you-manage-security-related-ocd-i-e-paranoia). Can you edit it to ask a question that hasn't been asked yet and meets the [faq]? – nealmcb Sep 21 '11 at 22:17
  • In other news, there is no need for insurance; after all, when was the last time you had a fire in your house? See, it's just a 0.5% probability, that's as good as 0%, right? No worries. In other words, Bad Things with low probability tend to have a rather unpleasant, high-profile impact *when* they actually happen. – Piskvor left the building Sep 22 '11 at 11:20
  • Note that often what happens is you are not targeted directly; you just found yourself in the way of a larger automated process you have no control over. The question is: do you wish to sit idly until it hits you, or do you try to prevent it? Because the question is not "will it happen" but rather "when will it happen". – tkit Sep 22 '11 at 12:47
  • @pootzko: Amen to that - 99.99% of all attacks are of the random, scripted "throw it against the server and see what sticks" variety. (Which is not to say that targeted attacks are unlikely - just that there's a humongous volume of 24/7 blind, automated attacks.) – Piskvor left the building Sep 23 '11 at 12:00
  • At least in the case of random.org there is no reason for using it over a secure local PRNG, so why would we use something we can't verify, if a better verifiable alternative is available? – CodesInChaos Aug 15 '12 at 18:17

12 Answers


Paranoia, professional skepticism, risk management... sometimes these concepts are hard to separate. The odds that somebody is reading my packets right at this moment are relatively low. The odds that somebody has sniffed my internet traffic at some point in the past year... I guarantee it has happened; I've been to DEFCON.

The advent of wireless networking has made MITM attacks more common. Wired networks are not safe either, and MAC spoofing is simple. While an attack is not highly likely in your home, it is when you're out in the world: coffee shops, airplanes ("Thanks for paying for 24 hours of access; I borrowed your MAC so I can use it, and oh-that's-your-Facebook-account?"), hotel rooms...

Perhaps you stay at home... but you may find that your router has a buffer overflow and is now running malware that sniffs for passwords. There's a court case here in town about a small construction company that lost hundreds of thousands of dollars to wire transfers initiated as a result of packet sniffing.

Even if using telnet for root login is safe 99% of the time, that 1% can have a very high cost, and that's what we're protecting against. There have been some fantastic hacks and MITM work on international scales (DigiNotar hacks, GMail / China, YouTube routed through Pakistan), but there have been many small-scale local ones as well.
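
That asymmetry is just expected-loss arithmetic. Here is a minimal sketch; the figures are hypothetical illustrations, not numbers from this answer:

```python
# Annualized expected loss = likelihood x impact.
# All figures below are hypothetical, for illustration only.
def expected_loss(probability_per_year: float, cost: float) -> float:
    return probability_per_year * cost

rare_but_severe = expected_loss(0.01, 500_000)  # 1% chance of a $500k breach
common_but_cheap = expected_loss(0.90, 1_000)   # 90% chance of a $1k nuisance
assert rare_but_severe > common_but_cheap       # the rare event dominates
```

The point of the sketch: a 1% event can carry more expected cost than a 90% event, which is why the 1% is what gets protected against.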

That risk is weighed against your needs -- some people really do have to be concerned about the internal design of CPU chips. People are bad at measuring uncommon risk. Ask somebody who doesn't ride how safe motorcycles are, then ask a rider. Ask them both what percentage of motorcycle crashes result in serious injury. I bet you'll get some really different numbers. Consider also the relative cost of safety features in an automobile as compared to the cost of its primary function of moving you around, both in monetary cost and added weight. Firewalls (the literal "stops fire from moving" kind), roll cages, seatbelts, airbags that cost $1,000 when deployed in a fender-bender... very few cars are in rollover accidents or burst into flames; do we really need to be that paranoid?

I've seen Google hacked, MITM attacks over cable modems, unauthorized malware performing packet captures, bank accounts cleared out over passwords sniffed, and attempts to put malware into the Linux kernel (automatic root access would have been amazing if that had sat for a year). I've seen random number generation be predictably bad (PHP, Netscape, Debian OpenSSL package) such that the resulting cryptographic authentication or decryption can be performed remotely in little time. I've seen newspapers and governments compromise cell phones of officials in important positions.

On a small scale, you're probably OK. On a big-company-wide scale, you're going to face these risks and you will see your peers lose to them. Paranoia and the security industry are bred because real financial loss happens when people make mistakes that affect big systems.

Jeff Ferland
  • Good answer. As well as the low-probability, large-scale losses, there's also an economic effect similar to herd immunity at work. If everyone blindly trusted everything, there would be many more criminals exploiting that trust. – user502 Sep 20 '11 at 18:09
  • A good example to add to yours is when 15% of the global internet traffic was rerouted to/through China: http://www.renesys.com/blog/2010/11/chinas-18-minute-mystery.shtml – Shadok Nov 17 '11 at 14:32
  • I think YouTube being routed through Pakistan was BGP hijacking, not MITM. – guest Nov 19 '17 at 06:43

This is such an astonishing question that I find it difficult to answer.

This distrust doesn't come from theory, but from experience. When we say "FTP is too dangerous to use", it is because we've found it, in practice, too dangerous to use.

You seem to think that a "packet logger" is either difficult to obtain or difficult to use. Neither is true. It takes seconds to download and install one and start collecting passwords or sessions.

The state of security is that I can walk into the local Starbucks and break into accounts within seconds, without using anything more complex than something like Firesheep.

Robert David Graham
  • Ease of use isn't the problem, it's the probability that it would happen. – TheLQ Sep 21 '11 at 21:23
  • @TheLQ - the problem is that it is successfully happening every day, and making a lot of 'bad guys' very rich with victims' money - in internet cafes, online etc - staggeringly widespread. From the side of the security professional we see it happen **every** day. Some environments are safer than others, but if you don't take precautions, **you** are the low hanging fruit and **will** be plucked at some point...because attack is so easy, and targeting is so automatable. – Rory Alsop Sep 23 '11 at 11:05

Short answer: yes. Computers can get a whole lot wrong before any human can realize there's a problem, so "trust" just doesn't work in computer systems.

Default distrust is the only viable posture for high-performance systems. To explain why, I'll contrast interactions between computers with interactions between humans.

In human interactions, default trust is worthwhile for little things. If a stranger comes up to you and asks for directions, you'll probably stop and spend some time helping them. If they ask for your mother's maiden name, you'd probably think a bit. If they ask for a list of everyone you've ever met, you'd probably not answer. If a thousand strangers came up in series, all wearing suspicious disguises, each asking about one person, you would stop answering pretty quickly. You're able to do this because you grow more suspicious (or bored) with time, and because you're able to use human judgement to recognize patterns.

Computers are valuable precisely because they do repetitive, mundane tasks quickly. Because they can work so quickly, an attacker who can get a computer to do a lot of little things in series can do far more damage than if they were interacting with a human, who would just get bored and/or annoyed.

One solution to this is to rate-limit responses or use quotas to prevent repetitive abuse. This approach is in direct conflict with computers being "valuable because they do repetitive, mundane tasks quickly" so cannot be used in general in high-performance systems.
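
The rate-limiting idea can be sketched as a token bucket; the class and parameters below are illustrative, not from this answer. It permits a short burst, then throttles sustained repetition, which is exactly the throughput cost described above:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows short bursts but caps the
    sustained request rate, blunting repetitive automated abuse."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token if possible.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]  # 10 back-to-back requests
# The first 5 pass (the burst allowance); the rest are rejected.
```

Note how the design embodies the trade-off in the text: the cap on `rate` is precisely what a high-throughput system cannot tolerate.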

The only posture that works for high-performance systems is to trust those who have the authority to request the desired work, and not trust those who don't. Trusting based on authorization only works when components don't make their own trust decisions -- an executive with a stake in not disclosing company secrets is still a risk if a credulous secretary can decide on their own whom to trust.

Good software architects are suspicious of their dependencies. Computer programs are developed from a medley of libraries, protocols, and hardware systems. Most developers cobble together systems this way in part because they do not understand the whole problem the pieces are trying to solve -- engineers need to specialize to build complex systems. An attacker often only needs to understand the flaws in a system, because, due to violations of POLA, an attacker can often compromise the whole system by compromising just a few flawed parts. Widespread lack of effective fire-walling leads good software architects to only build on components with a proven track record.

Some systems, like qmail, effectively firewall components to better approximate POLA. If you architect your system this way, then you can be more liberal in what you depend upon.

Mike Samuel

I can tell you why I act this way from personal experience. Here is what I have experienced in my life:

  1. A friend of mine didn't trust her boyfriend, and installed a snooper on her network. It was some commercial product she just installed on her Windows laptop. When he came to her house with his laptop, she was able to get all his plaintext logins. When she couldn't get his encrypted passwords, she bought a commercial keylogger and installed it on his machine, with a nice feature that it uploaded his passwords to a webpage. She had no technical skill (literally, close to zero), and kept those passwords for years to snoop on him.
  2. I was in a CS security course in college, and a subgroup of people in that course used to compete to see who could sniff/crack more passwords around campus and town in a given week. Sometimes they would get hundreds. I don't know if they ever did anything with them or not.
  3. There are plenty of well publicized tools like firesheep which non-hackers use just for fun. What are the chances someone might be using one of these when you are doing something private?
  4. There are companies around the world, including the US, which make money off of snooping on your traffic - see http://www.information-age.com/channels/comms-and-networking/news/1306138/isp-traffic-snooping-growing-fast.thtml and google Paxfire (ISPs redirecting searches for profit) - http://www.broadbandreports.com/shownews/115610

I don't see the risks as trivial and rare, I see them as pervasive.

Charlie B

Your question is similar to asking why you should lock your doors, but with the twist that all cyber doors get jiggled by would-be intruders constantly.

An unprotected system placed on the Internet will undergo simple automated port scan attacks within minutes of being booted (http://www.securityweek.com/informal-internet-threat-evaluation-internet-really-bad). Those attacks are being launched from systems which were likewise unprotected and now work for the attacker in addition to the legitimate owner. If you don't mind hosting malware intent on attacking even more systems then the answer is simple. If you would like to prevent that then you need to review the issues in your list for an effective counter to that threat.

Just as with locking the front door the decisions here are related to the threat environment. You likely can’t justify a steel door but that would be appropriate for the bank, school, and police station. So the answer is “it depends.” None of the potential vulnerabilities in your list are fictional. As you suggest, they may not present a likely threat to you. Additionally, with any risk analysis the cost of the loss is also a critical factor. In your case that may also be an insignificant consideration. For many of us security professionals on this forum the items we assist in securing are probably more valuable and larger targets than the systems you are personally considering.

So if you look at the threats and mitigations you list as knowledge and tools then the understanding and use of same is important to many, especially those on this forum, but perhaps not to you. Context is the factor missing in your question, thus “It depends”.

zedman9991

Do human misery and heaps of money sound like good reasons? A hundred years ago, any living person interacted with maybe a few thousand people in the course of a lifetime, most of them for a very short time. Now consider that your computer (depending on your and your ISP's setup) is reachable by billions of other machines all the time. If any one of those is used to target your machine, the possible fallout could be anything from a bit (or a lot) of lost bandwidth, via seeing everything personal you ever type, sending spam and viruses, identity theft, and cleaning out your bank account, to planting false evidence and alerting the authorities.
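
The scale argument can be made quantitative with a back-of-the-envelope sketch; the per-host probability below is an invented illustration, not a measured figure:

```python
# Chance that at least one of `hosts` independent attackers succeeds,
# given a (hypothetical) success probability per host.
def p_at_least_one(p_per_host: float, hosts: int) -> float:
    return 1 - (1 - p_per_host) ** hosts

# One-in-a-billion odds per host, a billion hosts able to reach you:
print(round(p_at_least_one(1e-9, 10**9), 3))  # 0.632 - more likely than not
```

Even vanishingly small per-host odds compound into a near-coin-flip at Internet scale, which is what makes "it almost never happens" intuitions misleading.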

Being connected to the Internet without proper security is like sharing a flat with the most evil and intelligent person in the world, who has a grudge against you and the rest of the world, and absolutely no scruples.

l0b0

"IRL" criminal attempts are limited by the resources available (how many bank robbers / thieves / muggers / fraudsters can you get working for you at once; how quickly can you identify and focus on potential victims) and by fear of detectability/traceability (how can you avoid being spotted in the attempt and recognised; how do you avoid being informed on).

Electronic criminals can use botnets to attack millions of potential victims at once, with minimal chance of detection and even less chance of follow-up or being traced and caught, and without needing to involve a large number of co-conspirators (& potential informants) in the execution of the plan.

Risk assessments made based on intuition about physical scenarios don't often extrapolate well to computerised crime.

Even the advice to at least avoid being the low-hanging fruit ("I don't need to run faster than the bear, just faster than you") doesn't fully apply -- botnet resources won't be left idle once one victim is found (password cracked or whatever). To abuse the bear analogy: there are actually 100 bears after us; I need to outrun both you and the 99 bears that will continue chasing me while the other one chows down on your leg.

Misha

Is the level of paranoia warranted? And why would people act this way?

Frequently, it is caused by an over-reaction to a previous event. Humans are terrible with statistics (which is why casinos and insurance companies get fabulously wealthy), so we tend to over-emphasize past anecdotal events.

In companies, this gets embedded into policies and standards.

Some distrust is required by regulations, because bad stuff has happened in the past. Those requirements also get added to the sort of paranoia that makes up IT security.

I currently work at one of the national labs. This national lab involves solar panels and windmills, but like all the national labs is part of the Department of Energy. Some of the other labs make nuclear weapons, so there is a large institutionalized paranoia - because spying and bad stuff has happened in the past and will happen in the future.

Tangurena

Absolutely - these happen a lot, because there is big money to be made, and that drives crime. To give you some specifics on your points:

Exactly how often does a MITM attack happen? ... Let's be serious, this doesn't happen often (to, say, 90% of the population)

It happens a lot - but to be fair, it is often easier to attack using a trojan (Man in the Browser or keylogger)

Distrust of Service X or Program Y because you haven't verified its source

Unverified code often has interesting extras. Even reputable code is occasionally found to be compromised - sometimes apps, sometimes entire operating systems. Most people can't check for this, so you have to decide where to place your trust. Usually greater trust is placed in those sources which have strong governance, or those with a wide population of verifiers.
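
One cheap check that doesn't require auditing source is verifying a download against a digest published by a trusted source. A minimal sketch, assuming the project publishes a SHA-256 alongside its release (the filename and digest comparison below are hypothetical); note this proves the copy wasn't tampered with in transit, not that the code is benign:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against the digest on the project's release page.
# published = "...value copied from the release page..."
# download_ok = sha256_of("installer.exe") == published
```

This is the "wide population of verifiers" idea in miniature: anyone can confirm they got the same bytes everyone else is vouching for.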

Distrust of public, shared, or friends' networks because of the risk of snooping - This I have to argue with, because most people don't have a packet logger or other ways of intercepting traffic. I'm sure 99% of networks out there just don't care about these kinds of things; they're more worried about routing and firewalling.

You should distrust all public networks - even by your definition, the 'good' ones don't care about these things, so won't spot 'bad guys'. In some countries, though, shared networks are compromised more often than not, sometimes with the owner's collusion. Either way - attacks will first happen against those who aren't protected, because it is so easy to do so.

Distrust of protocol X because it sends passwords in the clear... This goes back to earlier: rarely (if at all) is there a packet sniffer or other form of snooping on your network, your ISP's network, your ISP's ISP's network, etc.

This I would seriously disagree with. Almost every network I work with has sniffers on it, from banking to ISPs to government. The majority are supposed to be there (some aren't) and they log network traffic. Do you want your username and password for internet banking to be logged somewhere and then have these logs pop up on Wikileaks (has happened a lot) or on a hard disk on a rubbish tip (also happens) or be left in a taxi (again - it happens)?

Distrust of anything because "How do you know it's not compromised?" - How do you know there isn't a nuclear bomb under your house? ... Essentially I'm saying that while you don't know, you don't know a lot already, and even then, the risk of it happening is slim.

(tl;dr) You are correct in one way - it is about assessing and managing risks. However, the bit you need to think about is that this industry - Information Security - is probably a lot better at assessing the risk in this case, and the risk is not slim by any means. It is actually quite high - so much so that global banks budget for billions in fraud through these sorts of mechanisms!

Rory Alsop

I think one of the biggest problems is the lack of separation between data and instructions. Relating to Mike Samuel's post: humans talking to strangers is fairly safe. Only in very rare instances do people get brainwashed through conversation -- it is very hard to mess with our instructions. Computers are made to be reprogrammed, and are so flexible that it is very hard to verify the integrity of their own programs. This leads to a very high level of protection/paranoia towards anything that might lead to reprogramming the OS with malware.

Bradley Kreider
  • "Only in very rare instances to people get brainwashed through conversation" - Nope. Social engineering is the most powerful tool of them all. Pentester: "[I give you chocolate](http://www.schneier.com/blog/archives/2008/04/giving_up_passw.html), you give me your password; deal?" User:"Deal." Another example? Scammer: "Click on this link to see dancing bunnies; oh, and we need your complete personal info" User: "Sure, here you go, ME WANT DANCING BUNNIES". People are actually the *weakest* part of the system, and trivial to misdirect ("reprogram"). – Piskvor left the building Sep 22 '11 at 11:15
  • @Piskvor - The key distinction here (though some may argue this point, with good reason, as well) is that humans can be *taught* how to avoid falling victim to Social Engineering attacks and they are fully capable of withstanding those over time if they heed such lessons. Computers can also be "taught" to resist attacks, but attackers are constantly finding new ways to "re-teach" the computers so that they can still fall under hostile control. – Iszi Sep 22 '11 at 16:41
  • @Piskvor: Social engineering is *NOT* reprogramming. Having an abusive boyfriend that isolates you, cuts you off from your friends, and "reprograms" you to believe you deserve the abuse is the example. Social engineering does not change the underlying instruction set. – Bradley Kreider Sep 23 '11 at 17:27
  • @rox0r: That's just haggling over semantics - dynamic programming and self-modifying code doesn't change the underlying instruction set either :) – Piskvor left the building Sep 24 '11 at 08:25

Yes, it is necessary. To give just one example, take a look at the recent DigiNotar mess. There is evidence that hundreds of thousands of Iranian users were being successfully attacked by a MITM attack (presumably by the Iranian government) -- and they had no idea.

D.W.

I think the problem is best expressed by a $500 withdrawal that appeared on my bank statement some months ago. I saw my online balance and immediately called my bank. While I waited on hold, I thought through the scenarios of what could have happened. I took out my wallet and compared the contents to my paper copy.

Yes, twice a year I take all the cards out of my wallet, lay them on a copy machine and make a copy.

The withdrawal was made on Saturday and did not appear until late Monday. I gave the bank representative answers to his questions and was told my claim would be investigated, until then I was out $500. I asked why and was told "We don't give out that information".

This is the first problem: the good guys don't work together. The bad guys do. In fact, they advertise (see DATA BREACHES: WHAT THE UNDERGROUND WORLD OF “CARDING” REVEALS). The good guys work under NDAs (non-disclosure agreements), in addition to client restrictions and corporate policies.

I asked for his manager, and my $500 was credited back that evening. I took out my receipts and looked for every place I had used the offending card in the past two weeks. I had used the card in only one place: a gas station. I went back to the gas station and checked the pumps and card readers. Everything looked fine. I asked the gas station attendant who had the key used when the pumps need fixing. He had it. I asked him who else had access. He shrugged.

This is the second problem: technical solutions that give the feeling of protection while the system fails to operate as planned. It turned out that a dozen other customers of that gas station had issues with their statements.

It is distrust only if the system, the control, or the person is potentially worthy of trust. If they are not worthy of trust then it is appropriate skepticism.

The systems we use are weak. They were not designed with security in mind. Security is only added as the risk increases.

The Black Hats are sophisticated and skilled. Modern technology gives them the ability to attack targets across oceans.

What I practice is not mistrust; it is caution. A caution warranted by an understanding of the threats and awareness of the vulnerability of systems.

this.josh