
I am wondering about source code auditing: how hard would it be to fake a build that is being audited? Let me explain.

Say I'm a dishonest programmer who wants to put a backdoor into a system I'm selling, or what have you. I need some certifications to make myself look more legitimate, so I decide to undergo a source code audit. However, knowing that my fraud would otherwise be detected, I create a spotless version of the code, without the backdoor, and submit that for review. After passing with flying colours, all I need to do is scrap the clean version, replace it with my own malicious version, and distribute that. If someone got hold of it, how would they know the code had been tampered with?

How would source code audits help to detect a situation like this? How easy would it be to detect?

ThePiachu

4 Answers


It all depends on the scope of the audit.

If the audit is purely of the code (i.e. you give the auditor a copy of the code, they audit it and return a report), then it could be very easy to tamper with the code after the fact.

If the audit is a bit more comprehensive, and includes the development, test, and promotion-to-live procedures, then you should be able to rule out most opportunities for tampering to take place.

If you additionally audit the ongoing change management process, and carry out regular confirmation checks on the live code, then you give yourself an even greater level of assurance that the code in the live environment is the code that was developed and tested, and that it is appropriate for its purpose.

This is why limited-scope audits can be essentially useless, aside from getting a tick in the box (which may itself be useful...).

Rory Alsop
  • +1 on this. A comprehensive audit will include white-box (source code) analysis on the main branch, and black-box (static binary) analysis on the released executables. A combination of both provides pretty decent security coverage. – Polynomial Feb 24 '13 at 21:26
  • Even the most comprehensive (read: expensive) audits around today won't protect you from a malicious developer intent on hiding a backdoor in their code. Did the auditor *really* check that this month's version of gcc in the build system is actually built from the identical source as published on gnu.org? – Michael Feb 25 '13 at 22:18
  • That's my point - where are your scope boundaries? You need to have change management coverage of this if you want a full audit. – Rory Alsop Feb 25 '13 at 23:18

It all depends on where your trust boundary is. I would recommend reading Ken Thompson's "Reflections on Trusting Trust".

Hopefully the auditors provide a signature or hash of the file or source being audited; this way you can at least verify that you're actually running the same code that was reviewed.
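As a minimal sketch of that verification step (assuming the auditors publish a SHA-256 digest of the reviewed source archive; the file name and digest below are hypothetical):

```python
import hashlib

# Digest published by the auditors alongside their report (hypothetical value)
AUDITED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("audited_src.tar.gz") == AUDITED_SHA256:
    print("Source archive matches what the auditors reviewed")
else:
    print("MISMATCH: this is not the code that was audited")
```

Note that a bare hash only proves integrity if you received the digest over a trusted channel; a signature made with the auditors' key additionally proves who published it.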

However, even if you don't trust the auditors and you compile it yourself from source that you know is good (we'll pretend you're really good at spotting threats in code that's probably designed to hide them), you still have to trust your compiler (or, if you're downloading a version compiled by the auditors, their compiler).

While this is probably safe to do, it's good to recognize that compiling it yourself isn't foolproof. You can never fully trust anything; you have to set a trust boundary and decide at what point you want to accept something as safe.
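One practical mitigation along these lines is a reproducible (deterministic) build: if compiling the audited source always yields byte-identical output, anyone can rebuild it and compare the result against the binary the vendor ships. A hedged sketch (file names hypothetical, and this still assumes you trust your own toolchain, per Thompson's point):

```python
import hashlib

def digest(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# "vendor_build" is the binary the vendor distributes; "local_build" is what
# you compiled yourself from the audited source with the same toolchain.
if digest("vendor_build") == digest("local_build"):
    print("Builds match: the shipped binary corresponds to the audited source")
else:
    print("Builds differ: non-deterministic build, or tampering")
```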

Sam Whited

That's why a lot of people use Git and Jenkins. The idea is that all code goes into a centralized repository, and no single person has complete access to the complete build: when a programmer submits a piece of code, it gets reviewed by another programmer before being merged up the branch. One programmer cannot simply change code or compile at will.

Normally a source code audit is done right before compiling: the auditor checks the code just before it gets compiled, and no one else can make alterations without the auditor approving them. Note also that the auditor cannot introduce code himself. This means you can't just submit anything.
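As an illustration of that "no changes after sign-off" step (assuming the code lives in a Git repository; the audited commit hash below is hypothetical), a build script can refuse to compile anything other than the exact commit the auditor approved:

```python
import subprocess

# Commit hash the auditor reviewed and approved (hypothetical value)
AUDITED_COMMIT = "3f7a2c9d41be806512f48c1f0a9a7f6f0d2e8b11"

def head_commit(repo_path="."):
    """Return the full hash of the currently checked-out commit."""
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], cwd=repo_path, text=True
    ).strip()

if head_commit() == AUDITED_COMMIT:
    print("Checked-out tree is the audited commit; proceeding with the build")
else:
    raise SystemExit("Refusing to build: HEAD is not the audited commit")
```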

Unfortunately, this is the ideal situation and it's not done everywhere. One of the basic concepts of change management is to have three environments:

  • Development
  • Quality Assurance
  • Production

In development, all programmers get access; in QA, only a few (as few as possible); and in production, not a single programmer should have access. All of this builds a reasonable amount of assurance, but tampering might still occur. In the end you cover the risk as well as possible, but nothing is foolproof.

Lucas Kauffman
  • My question was mainly about a single malicious individual with full access to the source code who stores their private version on their own drive. That individual would have full control over the code as well as over what builds the customers would get. This would be a situation with small productions / one's own homebrew products. – ThePiachu Feb 24 '13 at 23:07

I would say it's basically impossible for a customer to trust that a build corresponds to an audited source unless he built it himself. Even that is no guarantee against the presence of back doors or other undesirable features: a back door is just like a bug in that you can never prove there are none.

I often wonder how this works in situations where complete trust is problematic. When the US sells F-16s to Pakistan, how do the Pakistanis know there's no back door in the avionics that will make them blind to stealth helicopters? Oops.

ddyer