Making the injected payload "not executable" is what Data Execution Prevention is about. There are various techniques to achieve that, depending on what the underlying hardware can do. On most architectures, this will be done through the MMU: pages that are supposed to contain "data" (e.g. the stack) are marked as non-executable.
Old x86 CPUs make DEP a bit challenging because the original MMU from the 80386 does not distinguish between "read" and "execute" accesses; thus, it does not allow memory to be marked as non-executable while still being readable. DEP can still be done to some extent with the help of segment registers, albeit with less flexibility (essentially, you can make the whole stack non-executable, but repurposing memory chunks dynamically is hard or impossible). Another method, implemented by the PaX patch, separates "read" and "execute" access rights on a per-page basis, but this requires some juggling with the TLB and incurs a runtime overhead (TLB misses trigger CPU exceptions). See this answer for details.
Newer x86 CPUs, and in particular all x86 CPUs that can run in 64-bit mode, have an MMU that natively distinguishes between "read" and "execute" (this is the NX bit), making these segment or PaX tricks obsolete.
The so-called "W^X policy" (read it as "Writeable exclusive-or eXecutable") states that the OS should never let a piece of memory be both writeable and executable at the same time: even if the payload can be injected (i.e. written to a chunk of RAM), it cannot be executed until some explicit access-rights change is performed on the page (and, presumably, the target code has no reason to make such a change).
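This is the discipline JIT compilers follow under W^X: the page holding generated code is writeable while being filled, then flipped to executable (and no longer writeable) before it runs, and is never both at once. A minimal sketch assuming Linux on x86-64; the wx_demo name and the embedded machine code are illustrative only:

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

/* Returns 0 if the W^X dance succeeded, -1 on failure. The page is
 * writeable (not executable) while code is copied in, then executable
 * (not writeable) when run -- never W and X at the same time. */
int wx_demo(void) {
    /* mov eax, 42 ; ret -- x86-64 machine code (assumption: x86-64) */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return -1;

    memcpy(page, code, sizeof code);            /* write while W, not X */

    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) /* X, not W */
        return -1;

#if defined(__x86_64__)
    int (*fn)(void) = (int (*)(void))page;
    if (fn() != 42)                             /* run the RX page */
        return -1;
#endif
    munmap(page, 4096);
    return 0;
}
```

Note that under a strict W^X kernel (e.g. OpenBSD, or SELinux with execmem restrictions) even this mprotect call may be denied unless the process is explicitly allowed to create executable mappings.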
DEP is not a panacea; attackers have since learned to reuse existing pieces of code, already in RAM and marked executable, as their payload. Look up Return-Oriented Programming for details.
Not allowing point 3 would be a more comprehensive method to prevent successful attacks; unfortunately, some widespread traditional programming languages (namely C and C++) are poor at that task. Control-flow diversion occurs as a consequence of an uncontrolled memory access (buffer overflow, use-after-free, double-free...) that the language allowed to happen because it does not check for such occurrences; the developer is supposed to add all the necessary checks. It so happens that even the best developers with the most thorough development practices still occasionally fail to do so.
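As a concrete illustration of such an unchecked access, the contrived struct below places a security-relevant flag right after a fixed-size buffer; copying 12 attacker-supplied bytes into the 8-byte field silently overwrites the flag, and the compiler inserts no bounds check. (The names and layout are made up for the demo, and the result assumes a little-endian machine; the copy goes through a byte pointer to the whole struct so the effect is reproducible, but a plain strcpy(s.name, input) would corrupt memory the same way.)

```c
#include <string.h>

/* Contrived layout: a flag sits right after a fixed-size buffer.
 * C will not bounds-check the copy below. */
struct session {
    char name[8];
    int  is_admin;
};

/* Returns the value of is_admin after an oversized, unchecked copy.
 * On a little-endian machine this returns 1: the 4 excess bytes of
 * the 12-byte input land in is_admin. */
int overflow_demo(void) {
    struct session s = { "", 0 };
    /* 8 'A's followed by 0x01 0x00 0x00 0x00 -- 12 bytes in total */
    static const unsigned char input[12] =
        { 'A','A','A','A','A','A','A','A', 0x01, 0x00, 0x00, 0x00 };
    memcpy((unsigned char *)&s, input, sizeof input); /* "overflows" name */
    return s.is_admin;
}
```

Languages with mandatory bounds checking (Rust, Java, and so on) would reject or trap this copy at the language level, which is exactly the check that C leaves to the developer.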