Don’t know what Elmo’s minions are doing, but I’ve written code that was at least equally inefficient. It was quite a few years ago (the code was written in Perl) and I at least want to think that I’m better now (though I’m not paid to code anymore). The task was to pull in data from a CSV (or something like that; as I mentioned, it’s been a while) and convert it to XML (or something similar).
The idea behind my code was that you could just configure which fields you wanted from arbitrary source data and where to place them in whatever supported destination format. I still think the basic idea behind that project is pretty neat: throw in whatever you happen to have and get something completely different out of the other end. And it worked as it should. It was just stupidly hungry for memory. 20k entries would eat up several gigabytes of RAM on a workstation (and back then it was a premium to have even 16 GB around), and it was also freaking slow to run (something like 0.2–0.5 seconds per entry).
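For the curious, the core idea was roughly this. A minimal sketch, not the actual code (which is long gone); the mapping, field names, and CSV handling here are all made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical config: which CSV column lands in which XML element.
# In the real thing this came from a config file, not a hardcoded hash.
my %mapping = (
    name  => 0,    # XML element name => CSV column index
    phone => 1,
    email => 2,
);

while ( my $line = <STDIN> ) {
    chomp $line;

    # Naive split; real code should use Text::CSV to handle quoting.
    my @fields = split /,/, $line;

    print "<entry>\n";
    for my $element ( sort keys %mapping ) {
        my $value = $fields[ $mapping{$element} ] // '';

        # Escape the characters XML cares about.
        $value =~ s/&/&amp;/g;
        $value =~ s/</&lt;/g;
        $value =~ s/>/&gt;/g;

        print "  <$element>$value</$element>\n";
    }
    print "</entry>\n";
}
```

Written like this it streams one row at a time and stays in constant memory; whatever mine did instead, it managed to turn 20k rows into several gigabytes.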
But even then I didn’t need to tweet that my hard drive was overheating. I understood perfectly well that my code was just bad, and I even improved it a bit here and there, but it was still very slow and used ridiculous amounts of RAM. The project was pretty neat, and when you had a few hundred items to process at a time it was even pretty good; there were companies that relied on that code and paid for support. It just totally broke down with even slightly bigger datasets.
But, as I already mentioned, my hard drive didn’t overheat on that load.