
    

the r-space 

7 - Google Search, SEO and Gemini - the three deadly nuisances



4/02/2025 at 16:14

SEO. You've heard it before. Those SquareSpace adverts? 'Easiest way to optimise for SEO'. Actually, that sentence doesn't make much sense considering SEO is an acronym for 'Search Engine Optimisation' so it would become 'Easiest way to optimise for Search Engine Optimisation', but no matter. 

Google's an interesting company. On one hand you have the genuinely cool part, which innovates and (in some cases) has a great work culture for its employees. On the other side of Google is the corporate drywall, plastered (no pun intended) with stickers reading things like 'AdSense', 'Data collection', 'The Future is AI large language models', and 'SEO over valid information'.

I have to admit, Google Search is horrifically terrible. Advertising has always been there, but in recent years companies have found ways to completely drown out valid search results by bidding their way to the top. There's a reason SEO is so important to tech companies: it means their website is seen first by potential consumers, which means more revenue. Good for the company in every case. Not so good, however, for the consumer.

Trying to look up tutorials, for example, is difficult. Sometimes you find random bot-generated 'how-to' guides that just restate the problem over and over, offer no solution, and then try to sell you a software service. It's not entirely Google's fault; the blame lies more with the companies trying their hardest to optimise for SEO in the first place. But the result is a search engine that is at times completely unusable and constantly regurgitates misinformation.

So what has Google done to rectify this problem? Oh, but of course! As any budding tech executive will say in conferences and boardrooms across the globe, the solution is generative AI. Artificial Intelligence. Google has its own, Gemini, and in every respect it's essentially ChatGPT, but Google-flavoured. But you know what they decided to do? Force it in as the first result on pretty much every Google search.

What Gemini does is crawl the first few search results, then try to answer your query with a response drawn from those aforementioned sources. Now this would be great if the top search results were valid and trusted sources of information, but what do you think the top search results are right now? SEO crap force-fed by companies, and Reddit comments. So what you often get is a completely incorrect answer that the AI 'hallucinates' as correct.

For most people, this isn't an issue: you can just scroll past and move on, finding a good result on the second page or so. But what about less tech-savvy people who trust Google? If they look up 'What is a recommended diet for my rock-climbing hobby', the AI may well tell them to eat 2-3 rocks per day as part of a balanced diet. The source? Joke Reddit comments. I get these are extreme cases, but the very fact that this can happen goes to show that AI isn't taking over the world any time soon.

My favourite description of AI is 'souped-up predictive text', because in essence that's how large language models work: they read a massive corpus of text crawled from the internet, and combine words to form answers. The entire tech industry has a fanaticism for shoehorning AI into everything it possibly can because it 'optimises' things. Correct me if I'm wrong, but is Copilot's inclusion in Microsoft Teams for making homework assignments really necessary? I mean, the average teacher most likely knows how to tell their students what to do for homework, and can certainly do it fast enough that it isn't a chore or a time-consuming process. So why does AI need to be in it? Simple: Microsoft gets to say how many of its products are 'improved' by Copilot, meaning more investment.
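To make the 'souped-up predictive text' point concrete, here's a toy sketch in Python (entirely my own example, nothing like what Google or OpenAI actually run): a bigram model that learns, from a tiny made-up corpus, which word most commonly follows which, and then 'writes' by repeatedly predicting the next word. Real LLMs are vastly more sophisticated, but the core idea is the same: predict the next token from patterns in a huge pile of text.

```python
from collections import Counter, defaultdict

# A tiny made-up training corpus; real models use billions of words.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a 'bigram' table).
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_words[word].most_common(1)[0][0]

# Starting from 'the', repeatedly predict the next word.
word, sentence = "the", ["the"]
for _ in range(4):
    word = predict(word)
    sentence.append(word)
print(" ".join(sentence))  # prints "the cat sat on the"
```

The output is fluent-looking but knows nothing: it's just regurgitating the statistically likeliest continuation, which is exactly why a model fed joke Reddit comments will confidently tell you to eat rocks.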

I'm no economist (so please take my opinions with a pinch of salt), but it seems some form of AI bubble has been forming ever since the initial ChatGPT craze began. The dot-com bubble grew so big because of the potential of the World Wide Web: investors went all-in on nearly every single startup with any fleeting mention of the internet, no matter how absurd the proposition was (or whether it was even possible). AI seems to be heading the same way; not through the actions of consumers, but through those of tech companies that want to force it into every single computerised thing possible.

To put it simply: not everything needs AI. It can be helpful at times, but the very fact that AI models can 'hallucinate' (essentially invent a completely incorrect answer whilst presenting it with the utmost confidence) means they can't be used as a reliable source of information. Especially not in Google Search. Do we need to go back to Yahoo or Ask Jeeves?

(P.S. Apologies for the absence, hopefully I'll have more positive blog posts soon.)

(P.P.S. One of my favourite blogs to read is over at ludic.mataroa.blog, which has some of the best and most intriguing portrayals of what I've complained about here regarding AI. Please read every single blog post he's ever made.)

 

(C) 2025 RSpace (@RSpaced)