This looks a lot like an XY problem: you want to solve problem X by doing Y, but you don't know how to do Y, so you are asking about Y here.
You want truly secure random numbers, you don't trust urandom, and you want to increase the entropy pool so you can use random. Right?
Don't use /dev/random... That's why /dev/urandom exists. It's seeded by /dev/random and uses a very strong algorithm to generate random numbers in a non-blocking way. The u in urandom usually means unlimited, so it will never run out of random numbers, unless you are on a diskless station (or a router, or a live CD distro) seconds after booting, before /dev/random has had time to build up some entropy.
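In practice you rarely even need to open the device yourself; most languages expose the kernel CSPRNG directly. A minimal sketch in Python (`os.urandom` draws from the same source as /dev/urandom on Linux and never blocks once the pool is initialized):

```python
import os

# 32 cryptographically secure random bytes, e.g. enough for a 256-bit key
key = os.urandom(32)
print(key.hex())
```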
Some people will argue a lot about random/urandom: that urandom is not secure enough, that only random gives true random numbers, and so on. Don't listen. Use urandom, the cryptographically secure pseudorandom number generator, and be happy. Using random can be a liability and create an incident: it blocks. That can lead to a DoS, not only on your application but on every other application whose developers thought that using /dev/random was the way to go.
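To see why the blocking matters, here is a rough sketch that times a small read from each device. Note that on recent kernels (5.6 and later) /dev/random no longer blocks once the pool is initialized, so the stall mostly shows up on older or freshly booted systems; the byte count and paths are just for illustration:

```python
import time

def timed_read(path, n=64):
    """Read n bytes from a device and report how long it took."""
    start = time.monotonic()
    with open(path, "rb") as dev:
        data = dev.read(n)
    return len(data), time.monotonic() - start

for dev in ("/dev/urandom", "/dev/random"):
    nbytes, elapsed = timed_read(dev)
    # /dev/urandom returns immediately; /dev/random may hang here on old kernels
    print(f"{dev}: {nbytes} bytes in {elapsed:.3f}s")
```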
So, if you are loading a large file full of random data, why rely on random or urandom at all? Just read the file. You could even use urandom to pick the position to read from, keep a count of how many records you have read, and stop after reading the random file enough times. But it would be terrible for security (the random file is very predictable if someone gets hold of it), performance would be worse than reading urandom, and you would have to watch the read count so you can ship another file before the randomness runs out (or it blocks, just like random). A sketch of what that scheme would look like follows.
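A rough sketch of that scheme, with hypothetical names (`random.bin`, `MAX_READS`), only to show how much extra work it is compared to just reading urandom:

```python
import os

RANDOM_FILE = "random.bin"   # hypothetical pre-generated file of random data
MAX_READS = 10_000           # stop before the file's randomness is "used up"

reads_done = 0

def read_from_random_file(n=32):
    """Return n bytes from a random offset in the pre-generated file."""
    global reads_done
    if reads_done >= MAX_READS:
        # mimics /dev/random running dry: refuse until a new file is shipped
        raise RuntimeError("random file exhausted; ship a new one")
    size = os.path.getsize(RANDOM_FILE)
    # use the kernel CSPRNG only to choose where to read from
    offset = int.from_bytes(os.urandom(8), "big") % max(size - n, 1)
    with open(RANDOM_FILE, "rb") as f:
        f.seek(offset)
        data = f.read(n)
    reads_done += 1
    return data
```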
Thomas Pornin has already written about this as well, and this page debunks a lot of myths about randomness too.