Well, that's a second, but related myth -- that somehow content and keywords that are farther down in the code are given less weight by the search engines. Or, conversely, that you have to cluster your target phrases up near the top in order to rank well for them.
And like a lot of myths, this one has a (teensy tiny) grain of truth at its heart.
It is apparently possible to build a page that's just so huge the search engines don't make it all the way to the bottom of the code. But the page would have to be enormously large.
If memory serves, the limit used to be a file size of somewhere around 100 KB. Not lines of code -- file size. And I'm talking about just the size of the basic text file with the HTML code in it; this doesn't count any graphics or media associated with the file. I mean, two thousand lines of code isn't anywhere near what we're talking about here. I'm talking way, way, way beyond that.
This was back in the days of everybody on dial up, when a 100KB file would have taken FOREVER
to download. So it was really unlikely that anybody who had a lick of sense was going to post a file that huge in the first place. (Which didn't stop a few people, but remember that part about having "a lick of sense"? I shall say no more.)
Honestly, I haven't done any testing recently to see if that limit is still in place. I can't imagine what sort of crap code you'd have to load a page up with to send the file size over 100KB.
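If you're curious whether one of your own pages is anywhere near that territory, it's trivial to check. Here's a minimal Python sketch; the 100 KB threshold is just the old rule-of-thumb figure from above, and the file name is a placeholder. Note that you'd want to measure the HTML as it's actually served (what "view source" shows), not the raw server-side file:

```python
import os

# The old rule-of-thumb limit: ~100 KB of HTML, not counting
# graphics or other media. This figure is illustrative only.
LIMIT_BYTES = 100 * 1024

def html_too_big(path, limit=LIMIT_BYTES):
    """Return True if the saved HTML file alone exceeds the limit.

    `path` should point at a copy of the page as the browser (and
    spider) receives it -- e.g. saved via "view source" -- so that
    any server-side PHP has already been parsed out.
    """
    return os.path.getsize(path) > limit
```

A normal page will come in far under the limit; you'd need an extraordinary amount of junk code to trip it.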
So, anyway, starting with the idea that stuff waaaaay down at the bottom of an impossibly huge file might not get indexed, somehow this transmogrified into the notion that content lower down in the code counts for less when it comes to ranking a page.
Can that kind of thing hurt your rankings? I'd say no. Here's why.
The first thing is to check how the code shows up in the "view source" option of your browser, since that's what the search engine spiders are going to see. If that 2,000 lines of code in the raw file is, for instance, PHP that gets parsed and processed by the server before the page ever makes it to the browser, it may only translate into a few lines of actual code (or none at all, depending on what the script is intended to do).
So the content might not be as far down in the code as a review of the raw, unparsed file might indicate.
Second, even if there are 2,000 lines of code up top when you view source in the browser, most if not all of that will be discarded by Google before they index anything.
Third, if somebody's going to actively optimize a page for a phrase, that phrase should probably be spread throughout the page's content, not sequestered way down at the bottom of the page. It's possible that a page won't rank well for a phrase that appears only once at the very bottom. But IMO that would be because the page wasn't well optimized for that phrase, not because the search engines were somehow giving text toward the bottom of the code less weight.