From a mathematical point of view: when you cascade two encryption functions f and g in that order (you encrypt with f, then encrypt the output of f with g), then g cannot make the data less confidential than with f alone, provided that an important condition is fulfilled: the secret key used for g must be unrelated to the secret key used for f.
It is not that easy to give a formal and correct definition of what "unrelated" means here. Basically, suppose that f is applied by Alice, who then hands the encrypted data to Bob, who applies g; decryption goes in the reverse direction (Bob, then Alice). Alice knows the key for f, Bob knows the key for g. By definition, Bob only works on what Alice produced, and Alice uses a key that Bob does not know. Thus, however incompetent Bob may be, he cannot unravel the encryption done by Alice: if Bob could, even by using a really poor algorithm, then any attacker could do the same, and Bob's incompetence would then count as a break on f (the encryption performed by Alice).
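To make the composition concrete, here is a toy Python sketch of the cascade: Alice applies f with her key, Bob applies g on top with his, and decryption peels the layers off in the reverse order. The "cipher" is a deliberately simplistic SHA-256-based XOR keystream used for illustration only (not a real cipher), and the key values are made up:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Stand-in for a real cipher; illustration only, NOT for actual use."""
    out = bytearray()
    block = b""
    for i, byte in enumerate(data):
        if i % 32 == 0:
            # New keystream block: hash of key plus a block counter
            block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.append(byte ^ block[i % 32])
    return bytes(out)

# Hypothetical keys, generated independently of each other ("unrelated")
key_f = b"alice-key"
key_g = b"bob-key"

plaintext = b"attack at dawn"
after_f = keystream_xor(key_f, plaintext)    # Alice: f(m)
ciphertext = keystream_xor(key_g, after_f)   # Bob:   g(f(m))

# Decryption goes in the reverse direction: Bob's layer first, then Alice's
recovered = keystream_xor(key_f, keystream_xor(key_g, ciphertext))
assert recovered == plaintext
```

Bob only ever sees `after_f`, which is already encrypted under a key he does not know; nothing he does with `key_g` can degrade the protection that `key_f` provides.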
This shows that the "unrelatedness" of the keys for f and g can be obtained either by generating the two keys independently of each other, or by producing both from a Key Derivation Function. For instance, there is a master key K, and you run it through a KDF to obtain 256 bits of output; the first 128 bits will be the key for f, the other 128 bits will be the key for g. If the KDF is secure as a KDF, then the two keys will be sufficiently "unrelated".
Using the same key for both algorithms ("AES" and "HomeBrew") could prove deadly. Deriving one key from the other could also be problematic, depending on how you derive them. Deriving both keys from the same source with a KDF that is sufficiently one-way ought to be safe.
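The split-a-master-key option can be sketched as follows, with a minimal HKDF (RFC 5869) built on HMAC-SHA-256 and an empty salt; the master key value and the `info` label are made up for the illustration:

```python
import hashlib
import hmac

def hkdf_sha256(master_key: bytes, info: bytes, length: int) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256 and an all-zero salt."""
    # Extract step: PRK = HMAC-SHA256(salt, master_key)
    prk = hmac.new(b"\x00" * 32, master_key, hashlib.sha256).digest()
    # Expand step: T(i) = HMAC-SHA256(PRK, T(i-1) || info || i)
    okm, block = b"", b""
    for counter in range(1, -(-length // 32) + 1):
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

master = b"\x13" * 16                           # hypothetical master key K
okm = hkdf_sha256(master, b"cascade-keys", 32)  # 256 bits of output
key_f, key_g = okm[:16], okm[16:]               # two 128-bit keys
```

Because the KDF is one-way, knowing `key_g` tells an attacker nothing useful about `key_f` (or about K), which is exactly the "unrelatedness" the cascade needs.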
The above is the "mathematical" answer. But cryptography does not happen in the ethereal world of mathematics. At one point, you run it on actual hardware. Even with "unrelated" keys, the combination HomeBrew(AES) can still be harmful in the following ways:
Software is secure only insofar as it is reviewed and reviewed again and fixed. This is a time-consuming process. By implementing two algorithms instead of one, you doubled the quantity of code to review, and thus, mechanically, halved the available workforce for the review of either algorithm.
Computing "HomeBrew" might be fast, but certainly not as fast as not computing "HomeBrew" at all. By cascading the two algorithms, you increased the computational cost of encryption. This may imply increased hardware costs, or higher latency, and possibly degraded user experience. If the user experience is degraded too much, then the user may be tempted to simply skip the encryption step, and then security has been seriously harmed.
"HomeBrew" being, well, home-brewed, it is not compatible with anything else on Earth. When you use standardized algorithms, you may have the choice between several implementations. For instance, if using standard SSL/TLS, you can rely on the existing OpenSSL library; but if at some point you get fed up with the hole-of-the-month feature of OpenSSL, you can switch to another implementation like GnuTLS, which implements the same protocol -- you can do that on one side without impacting other systems, since they all run the same standard protocol. By using a home-brewed algorithm, you squander these advantages. You are now married to a single implementation and must stick with it until the End of Time.
If you use AES(HomeBrew) instead of HomeBrew(AES), you get all the disadvantages above, and you get a brand new one as well, which is that "HomeBrew" now processes the actual plaintext, and may leak information about it through side-channels. In that configuration, you lose all the mathematical security that I was explaining above: incompetent Bob cannot reveal anything about the plaintext when he only ever sees data encrypted by Alice; but if he gets the real plaintext, then anything goes.
> proportionally to your ability to keep the scheme secret and your crypto skills
Neither algorithm secrecy nor crypto skills can be measured or even quantified, so your "proportionally" is meaningless.
This is a fundamental point: as per Kerckhoffs's principle, we use public algorithms because we do not know "how much secret" a given algorithm can be, especially since it is implemented as software, and also exists as source code, and in the head of one or several developers. When we want security, we do not want just security; we also want to know how much of the stuff we have. And we cannot do that with a secret algorithm.