What's so bad about using AI as a better search engine?
The other day, I caught myself saying this to a friend: "Oh, you're just using AI as a better search engine? You're barely scratching the surface."
As soon as I said it, I regretted it. I remembered how condescending that same AI shaming felt when it was aimed at me in my early AI days. "How cute, he's getting faster search results..."
Back then, I was frustrated with how much time it took to sift through sponsored ads and search results to find what I needed. It was a big enough problem that I tried AI as an alternative. It searched and sifted far faster than my previous solution. I got value immediately, so I stuck with it. I really didn't care what the solution was. I just knew it helped me solve my problem.
Since then, I've learned to use AI for many more things. I get a bit more respect than before, but I still get some condescending looks from the AI-fluent.
Here are three of the ways I use AI today:
Note taking: AI will summarize meetings and generate a series of next steps. That's great, but I still like to sketch and take notes on paper. It isn't that I don't trust AI. It's that I like to have multiple touchpoints with the information. It helps me retain more. In college, I took handwritten notes, listened to lectures, read the texts and rewrote my notes. The more touchpoints, the more I retained. In work meetings, I used to have only two touchpoints. My retention was limited, and that was a problem. Now I have three touchpoints. I retain more, which matters more to me than eliminating one of them.
Customer support: In this use case, I'm the customer who needs support. Most companies are investing tons of money in AI agents that can provide customer support. In my experience, they still suck. That's a problem for me. I operate a lean business. I don't have an IT department. When one of my software tools isn't doing what I need, I have to hope that the chat, email or FAQs will do the trick. They rarely do. Getting to a human is nearly impossible. Today, none of that matters to me because I don't use any of it. Instead, I articulate my problem to my AI tool and send it off into the abyss to find the combination of FAQs, support blogs or whatever it needs to find me answers. AI solves my problem, just not in the way people expect. (I wrote more about this here.)
Code writing: Long ago, I wrote code as a web developer. I changed career directions over the years, and my coding skills eroded and became unnecessary. Today, I maintain my own website. I use a web platform with templates and modules that do a lot of the work for me. Sometimes I need a module the platform doesn't have. For that, I go to AI. I explain what I'm looking for and have it write the code, plus step-by-step instructions for making it work on my site. I don't need to retain that knowledge or even know how it works. I just need my problem solved, and it does exactly that.
In those examples, I’m getting great value from AI because my usage is directly connected to my problem.
Where I’m not using AI:
Content creation:
I just wrote a book. I take topics from the book and write about them in my newsletters and on LinkedIn. So many people have told me I should just upload my book to AI and have it come up with a bunch of newsletters and LinkedIn posts to save me the trouble.
I could do that, but it's a solution to a problem I don't have. I enjoy writing these things. Writing helps me figure out what I think and how to articulate those thoughts. That's a huge benefit. I've been surprised at how little writer's block I've experienced when doing this. I'm getting real value out of my current solution.
In all of these examples, I think there are two things at play.
The complexity of a solution gets more attention than its problem-solving ability
Teams and companies are encouraged to "Think Big". Simple ideas are discouraged. The result is solutions in search of problems. Just because something can be built doesn't mean it should be. Simple ideas don't get headlines, even when they solve real problems.
The curse of knowledge
When I shamed my friend about their AI usage, I was suffering from the curse of knowledge. I'd forgotten what it was like when search was my main use case, when I didn't know how to use AI for the things I do now.
I recently learned about an experiment conducted by a law firm. The firm gave the same set of AI tools to two separate groups of paralegals. The only difference was what each group was told to use them for. One group was told to use AI to be more efficient; the other was told to use it for all the things they hated doing. The second group's usage was far higher.
All of these examples highlight a huge opportunity and risk around AI. If we start from solutions, we run the risk of wasting AI's power on solving things that don't need solving. If we start from problems, we'll maximize the power of AI.